Wednesday, December 14, 2016

Thoughts on Post-2016 Election America: Re-examining the "Fringe Fighting" Hypothesis

In my conversations with people (both online and face-to-face) about the post-election media environment, I'm finding it increasingly difficult to maintain my position as a dispassionate optimist. Is this because the world itself is contradicting that position, or is it because I'm being met with more resistance from those around me? That's what I'm still trying to sort out.

Many of the conversations come back to the premise that America is somehow more hostile than it used to be (not just that our leaders are objectionable and/or dangerous, but that the increasing danger resides in our populace). There are also conversations about what particular politicians are doing, will do, or can do, but I want to set those aside for a moment and focus on premises relating to the American population and the extent and intensity of Americans' hostility toward one another. Previously, I've argued that the impression that we're a nation divided is largely an illusion, that the true conflict is mainly at the fringes, but that was before the election. So, I'd like to revisit that argument in light of discussions of public opinion, fake news, and a general sense of threat.

Essentially, my argument was that the strong disagreement we see in our culture is relegated to small groups of individuals on either end of an ideological spectrum, groups that manifest themselves in highly visible ways. Although it can appear as though our entire culture is in a state of unrest (and is getting worse in this respect), this may be an illusion. To paraphrase myself:

This illusion occurs when we mistake uncommon, extreme online behaviors for exemplars. We implicitly or explicitly link hundreds or thousands of people actually stating a belief online (or, in this case, acting hostile toward other Americans online) with the behaviors and beliefs of a larger mainstream group that, while not actually stating the belief, has stated or acted in such ways that make it clear that they believe in some of the same things as the group that actually states the belief online. In the U.S. right now, the large groups to which we most often extrapolate are "liberals/Democrats" and "conservatives/Republicans." Dissimilarities between ideas held by the small group actually stating the belief (or actually being openly hostile) online and members of the large group who are not stating the belief (and are not actually being openly hostile online) are ignored in favor of whatever they have in common. This is justified on the grounds that what the small group and the large group have in common is thought to represent a shared, coherent ideological framework (see "Arguing with the Fringes" for further details).

In retrospect, I shouldn't have used the word "fringe" to describe these small groups. The word feels dismissive and judgmental, which is not what I intended. Really, I just want to make a statement about the size of the groups that are in strong disagreement with (and are hostile toward) other Americans. Still, the term "fringe fighting" has a certain ring to it, and I can't think of a suitable alternative word for these groups at the moment, so for the purposes of this post, I'll stick with "fringe."

Arguments/evidence for the Fringe Fighting hypothesis

Though there is more talk about social unrest than there was when I wrote "Arguing with the Fringes," this talk fits a "moral panic" narrative in which people become extremely alarmed over novel behavior that is rapidly becoming popular (often involving media use) and extrapolate to a future world in which the novel behavior radically changes our world for the worse. There are, of course, concerns about rapidly spreading novel behaviors that turn out to be justified, and the dismissal of such concerns as hysterical can have dire consequences. But there are also dire consequences to succumbing to overblown fears, namely rapid declines in interpersonal and institutional trust that are essential to functioning societies, in addition to the "boy who cried 'wolf'" problem (if one's concerns are found by others to be overblown, one loses credibility, forfeiting the ability to call others' attention to future threats). Given the similarities between the talk of social unrest and previous instances of moral panics, it at least seems worthwhile to consider the possibility that concern about Americans' hostility toward one another is a moral panic.

It is also important to ask, "What are we using as indicators of how 320 million or so Americans think or feel?" How Americans voted and what they say on the internet seem to be commonly used indicators. The majority of Americans did not vote in the last election, so it would be difficult to use 2016 voting behavior to assume anything about how "America" feels about anything. For those who did vote, whom they voted for is a pretty weak signal of any particular belief, as these candidates, in effect, bundle together various disparate beliefs, and some votes are not intended as endorsements of anything the candidate stated or believed but instead are merely "protest votes."

What people say on the internet is also a weak signal of overall public opinion. For one thing, comparatively few people post about politics and social issues (roughly one-third of social media users, according to Pew). And many of those are posting information that is visible only to those in their immediate social circles (e.g., posting on Facebook). Such information is highly salient to individuals consciously or unconsciously forming beliefs about what other Americans believe, but it is hardly representative of Americans as a whole.

We may also question assumptions about the impact the hostility we're able to see is having. The extreme voices may have been largely filtered out because most of their friends unfollowed or hid them. The only people who don't filter out the extreme voices are the ones who already would have believed whatever the poster is trying to convince them of. What good is sharing a news story if very few people follow you, and those few people already knew about the news you're sharing? As a side note, it would be nice to have some information about actual audience and the practice of unfollowing to go along with the information about sharing and 'liking' information online.

Better evidence about what America, as a whole, believes can be found in the General Social Survey, which attempts to look at what ALL Americans believe rather than the few that contribute content to the internet. Data from the survey suggests that American public opinion on a variety of social issues has been relatively stable over the past few decades; an abrupt shift in that, though not impossible, would seem unlikely.

Finally, there is evidence that the growth of political animosity in the U.S. is a trend that pre-dates social media, so perhaps social media is just making visible what was already there. It should be noted that animosity (an attitude) is not the same thing as hostility (a behavior).

Arguments/evidence against the Fringe Fighting hypothesis

There is some evidence of growing distrust of the media. If you're not getting your information from the media (or whatever you define "mainstream media" as), where are you getting it from? You could either get it directly from alternative news sources or get it via social media, which carries stories from those alternative news sources. Many existing measures of exposure to news have yet to catch up with the way we consume news. It is entirely possible, given the growing distrust in mass/mainstream media and the lack of good indicators about where Americans get their information, that Americans have quickly shifted toward consuming news stories that frame current events chiefly in terms of conflict between groups of Americans.

What does this have to do with hostility? Well, those intra-American-conflict news stories could play to whatever various groups are inclined to believe about various other groups, play to one's fears (climate change on the Left; undesirable social change on the Right; an unfair, "rigged" economy and government on both the Left and Right), decrease trust and empathy, increase fear and cynicism, and sow dissent. Very few people may be initially fighting with one another, but if those people can disseminate information so as to convince disinterested others that the people on the Other Side of the fight are a growing threat to everyone, they can effectively "enlist" those disinterested others in the fight. What starts as a fringe fight could quickly grow into something larger.

So What?

For a moment, let's assume that there is a problem, that a significant number of people in America disagree with one another to a significant degree. Is this necessarily bad? Disagreement isn't bad in and of itself; arguably, it's desirable so as to avoid the pitfalls of "groupthink." But strong disagreement could lead to incivility (which, some would say, is antithetical to empathy and compromise, and compromise seems like a prerequisite to having a functioning democracy and economy) and censorship (which is antithetical to democracy and the progress of science and education). Incivility could lead to violent attacks (though I've heard at least one scholar argue that arguments can be uncivil and not be bad in these senses). Insofar as we see evidence of strong disagreement growing and/or leading to incivility, censorship, and violent attacks, then yes, it's bad.

Assuming we have a problem, what do we do about it? It's possible that the traditional channels by which we sought to address social unrest would no longer work within a decentralized, non-hierarchical information system like today's (or tomorrow's) internet. This is the "everyone finds their own facts" problem (a topic for an upcoming blog entry). Even if you engage in a dispassionate analysis of evidence and find support for the fringe fighting hypothesis, or evidence that people are consuming more and more biased information and wandering further from any objective truth, what are you going to do with that information? You might teach it in a class, but if people don't want to hear it, then people might just start distrusting teachers more. You might publish in an academic journal, but what good is that when the journal loses its sense of authority and credibility? If you publish in that academic journal and your research is covered by The New York Times, what good is that if fewer and fewer people trust The New York Times?

I'm left with a desire (quixotic as it may be) to try to step outside the problem. I know that many are fond of casting intra-American conflict (online and offline) as part of a global phenomenon, but here again, I think we're making a facile analogy, choosing to see the similarities and ignoring the many differences. Surely, not every country is experiencing precisely the same kind of online conflict problem that Americans are experiencing. I was reminded of this point while attending this year's Association of Internet Researchers annual conference in Berlin, where I was fortunate enough to present on a panel with researchers from Israel, Denmark, and the U.K. I was left with the notion that not all online discussion forums are the same with regard to conflict, that intra-group conflict is not inevitable in the digital era.

We might also step outside of the internet for a moment. Anecdotally, I've observed a few folks taking a break from social media and news media because the emotional pitch of online discourse became so shrill as to be unbearable. I'm reminded of an idea put forth in Joshua Rothman's book review in the New Yorker (as well as the book he was reviewing, I assume) that our face-to-face interactions with individuals and our feelings about the political groups to which those individuals belong are often in conflict. Short version: we love (or at least tolerate) our neighbors but we hate the political groups to which they belong. The basic idea is that it is harder to hate a person in person. It will be important to see how our face-to-face interactions at work, at school, with family, and in public places progress alongside our perceptions of behavior online.

So, where to go from here? For starters, it seems worthwhile to examine the framing of news about current events: do we really see an uptick in exposure to intra-American conflict framing, or are our filter bubbles fooling us into thinking this? It's also important to understand more about the contexts in which online hostility occurs (a goal of my current research project examining hostile behavior on Reddit) and when and where this is associated with offline hostility.

Sunday, November 06, 2016

Nostalgorithms

Nostalgia is a feeling, to start with. We have songs and photographs we happen upon that conjure nostalgia. We have articulations of nostalgia, in poetry, in the lyrics of songs, in films, TV shows, novels. And now, we have algorithms that serve up content (songs, photographs, news) that make us feel nostalgic.

The “Your memories on Facebook” function fascinates me, as a potential mechanism for conjuring nostalgia. It's hard to know precisely how the algorithm works - how and when it decides to bring a photo up from the past and ask you whether or not you'd like to share this photo again on your timeline - but it appears to bring up pictures from the same day of the year in previous years, most likely at least two years in the past. It is also likely that photos/memories are chosen based on the number of "likes" or comments they received at the time.
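Just to make my own speculation concrete, something like the toy sketch below is what I have in mind: same calendar day, at least two years back, ranked by the engagement the photo got at the time. None of this reflects Facebook's actual code; the field names and thresholds are mine.

```python
from datetime import date

def candidate_memories(photos, today=None, min_years_back=2):
    """Return past photos posted on today's month/day, ranked by engagement."""
    today = today or date.today()
    candidates = []
    for photo in photos:
        posted = photo["posted_at"]  # assumed to be a datetime.date
        same_day = (posted.month, posted.day) == (today.month, today.day)
        old_enough = (today.year - posted.year) >= min_years_back
        if same_day and old_enough:
            candidates.append(photo)
    # Rank by the engagement the photo received at the time, as guessed above.
    candidates.sort(key=lambda p: p["likes"] + p["comments"], reverse=True)
    return candidates

# e.g., candidate_memories([{"posted_at": date(2013, 12, 14), "likes": 40, "comments": 6}])
```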

That’s certainly one of the simplest approaches, and it works well enough, but perhaps not as well as it could work. I’ve only really heard people talk about this aspect of the Facebook experience (as is the case with many aspects of any kind of technology) when it doesn’t work. People make note of the times when Facebook served up a picture of an ex or, worse, a deceased loved one. It’s clear that it doesn’t work perfectly, and yet it works well enough to persist.

Does that algorithm learn from the times it presents unpleasant memories to users? Probably. Perhaps it starts by serving up memories, allows a certain period for the memories to "steep," and, after enough trial and error, identifies certain types of memories that people elected to share. These types would be defined by objective qualities the shared memories had in common, qualities that set them apart from the non-shared memories. The algorithm is "dumb" in the sense that it doesn't know anything about the concept of nostalgia, or the individual users' lives, or about human emotion in general. But if you give it enough data, enough pictures, enough memories, it will probably get better at serving up pictures that you want to share, pictures that tap into something that we would call nostalgia. It learns not to serve up those pictures of your ex.
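The trial-and-error loop I'm imagining could be as crude as the sketch below: treat "did the user re-share the memory we served up?" as the label and a few objective qualities of the photo as the features. The features here (age of the photo, likes at the time, number of people tagged) are illustrative guesses on my part, not anything Facebook is known to use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_since_posted, likes_at_the_time, people_tagged, was_reshared]
history = np.array([
    [2, 55, 3, 1],
    [4, 12, 1, 0],
    [3, 80, 4, 1],
    [6,  5, 0, 0],
    [5, 60, 2, 1],
    [2,  8, 1, 0],
])
X, y = history[:, :3], history[:, 3]

# Fit a simple model of "which served-up memories get re-shared."
model = LogisticRegression().fit(X, y)

# Score a new candidate memory: 3 years old, 45 likes, 2 people tagged.
print(model.predict_proba([[3, 45, 2]])[0, 1])  # estimated probability of re-sharing
```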

Perhaps there's an unseen pattern or signature to nostalgia that could be revealed by the algorithm. It's not just a matter of how much time has passed that makes us nostalgic for something. It has to do with the specific contours of social relations and feelings, all of which leave an imperfect imprint in our social media archives (less and less imperfect as more and more of our social/emotional lives are channeled through social media).

Here's an example pattern using data that a social media company like Facebook could collect: Optimal nostalgia resides in the pictures with that person you appeared with in other pictures and exchanged frequent IMs with for a period of three years, after which there were fewer and fewer pictures of the two of you together and fewer IMs until the trail went cold, but you were still "liking" and occasionally commenting on their posts, though this wasn't reciprocated, suggesting a kind of unreciprocated longing for re-connection. Or maybe it takes into account the time of day at which it was posted (maybe people are more nostalgic about things that happened at night) or the place (maybe nostalgia clings to certain places more than others, or it requires a certain physical distance from our current locations, at least 1,000 miles). Maybe it's all there, residing in the metadata.
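If I had to write that signature down as a scoring rule, it might look something like the sketch below, with every field name and weight invented purely for illustration:

```python
def nostalgia_score(relationship):
    """Score one relationship's 'nostalgia signature' from hypothetical metadata fields."""
    score = 0.0
    # A shared history that faded: many photos together early on, few recently.
    score += 2.0 * max(0, relationship["early_photos_together"]
                          - relationship["recent_photos_together"])
    # IM volume that tapered off rather than holding steady.
    score += 1.5 * max(0, relationship["peak_ims_per_month"]
                          - relationship["current_ims_per_month"])
    # Unreciprocated attention: you still "like" their posts; they don't like yours.
    score += 3.0 * max(0, relationship["likes_given_last_year"]
                          - relationship["likes_received_last_year"])
    # Physical distance: maybe nostalgia needs at least ~1,000 miles of separation.
    if relationship["miles_apart"] >= 1000:
        score += 5.0
    return score

# Photos documenting the highest-scoring relationships would be the ones surfaced.
```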

I think about nostalgia in terms of music, too. Pop music (and movies/TV shows that use pop music) has worked with a crude version of the nostalgia principle for decades, if not centuries. Artists arrange a song in a familiar way, or include a certain familiar phrase or melody, so as to strike a particular emotional chord in the listener. Genres are revived in part out of nostalgia. But algorithms could give us something much more fine-grained, more personalized. Imagine that your entire music listening history was archived (as will be the case for people starting to listen to music in the age of streaming services like Spotify, Pandora, or YouTube). The program would know that you really loved a particular song (you played it 100 times that one week in 2010) but then seemed to have forgotten about it (you haven't played it since). One of life's great pleasures is hearing that song you loved but have not heard in years. Part of you knows the rhythm and the lyrics, but another part of you has forgotten them. Your ability to sing along with the first verse feels instinctual, but you can't remember exactly what the chorus was until it comes crashing in, and you think, "how could I have forgotten this?"
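The "loved it, then forgot it" detector I'm describing could be as simple as the sketch below, run over a (hypothetical) complete play history: flag songs with an intense burst of plays years ago and essentially nothing since. The thresholds and the shape of the input are my own inventions.

```python
from datetime import date

def forgotten_favorites(play_history, today=None, burst_plays=50, quiet_years=3):
    """play_history maps song IDs to a list of dates on which the song was played."""
    today = today or date.today()
    picks = []
    for song, plays in play_history.items():
        plays = sorted(plays)
        years_quiet = (today - plays[-1]).days / 365.25
        # Find the song's busiest calendar year and how many plays it got then.
        busiest_year = max({p.year for p in plays},
                           key=lambda y: sum(p.year == y for p in plays))
        plays_that_year = sum(p.year == busiest_year for p in plays)
        # "Loved it, then forgot it": a burst of plays, then years of silence.
        if plays_that_year >= burst_plays and years_quiet >= quiet_years:
            picks.append(song)
    return picks

# e.g., forgotten_favorites({"that_song": [date(2010, 3, d) for d in range(1, 29)] * 4})
```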

Maybe the music program is integrated with your preferred social media app. The social media app has a rough indication of your mood and what's going on in your life. It can make a pretty good guess as to when you're ready for an upbeat song and when you're ready for something more introspective. Maybe it knows that you found a love song when you weren't in love and seemed to like it but couldn't listen to it too frequently because you weren't in love. And now that it knows you're in love, you're ready to hear it again. Maybe it knows that the lyrics to another song will be more poignant to you now that you're 40. 

There is a visceral revulsion at technology colonizing overly-personal or artistic realms of human experience. All fine and well if the algorithms make shopping more efficient, but nostalgia? Memories, and our experiences of them, are tied to identity. This may account for the way in which we need nostalgia triggers to feel serendipitous. The idea of an algorithm writing poetry is a bit unsettling, but what about an algorithm that can conjure the feeling that inspires poets in the first place?


Saturday, August 27, 2016

The Power of the (online) Court of Public Opinion

I've been trying to keep track of instances in which some individual or organization makes a decision or takes an action that others consider to be unjust, leading those others to decry the decision or action online and to take some sort of action that, ultimately, constitutes a kind of judgment and/or punishment of the individual or organization. Here are a few examples:

1) A judge made a decision that certain members of the public believed to be unjust. Those members mobilized online and demanded the removal of the judge from his position. In one sense, it didn't achieve its intended effect (the judge wasn't removed), but in another sense, it did: the judge decided not to preside over cases involving sexual assault and then decided not to preside over any kind of criminal cases. The cause of his decision seems to be the "distraction" created by the online protesters. If the case hadn't risen to a certain level of prominence, if the protesters hadn't been so vocal, then it seems reasonable to conclude that the judge would have gone on presiding over criminal cases, including cases involving sexual assault.

2) A pharmaceutical company made a decision that certain members of the public felt was unjust. Those members decried the decision online and, in response to this, members of the U.S. Congress are discussing ways in which they might change regulation of the pharmaceutical industry. If those members of the public were not as vocal, it seems unlikely that Congress would have taken up the issue. It's too early to say whether this attempt to change the ways in which companies are regulated will succeed in any way, but it seems closer to succeeding than it was before the online outcry.

3) On my Facebook feed, I came across a post from a friend which featured a picture of an individual riding the subway and a description that explained the ways in which he sexually assaulted and/or harassed other individuals on the subway. The friend was re-posting it: she had no first-hand experience with the individual that I know of and it was unclear (as it is so many times with re-posts or "shares" on Facebook) whether the experience was second-hand, third-hand, fourth-hand, etc. Again, it's hard to know what the impact of this post will be: will people who would have otherwise been victims recognize this man and avoid him? Will people recognize him and report him, or shun him or verbally assault him? All outcomes seem plausible.

These instances seem to be getting more numerous and they prompt me to think about how justice is meted out in the online court of public opinion and how the way that happens changes the relationship of power and justice.

How do people respond to someone they feel has done something unjust but is not being punished by the traditional forms of meting out justice (law enforcement; the legal system)?

1) A group of people on the internet harasses someone and/or makes it possible for others to harass the person online and offline.

2) A group of people on the internet ruins someone's (either a person's or a corporation's) reputation. It's hard to say how permanent the reputation-ruining really is. I think it probably varies a lot from individual to individual, but we treat it as a given that from now until the end of time, now that you've been defamed online, you will be unable to get a job, will be unable to date, will have to move, will have to change your name, etc. It's easy to think of examples of permanent reputation damage (The New York Times Magazine had an excellent piece last year about how this happens to folks who post hurtful tweets), but I'm quite certain there are instances of temporary reputation damage that we're forgetting. And that's my point: we can't just rely on memory to evaluate the impact of online reputation-ruining because we won't remember the instances in which reputation damage was temporary.

It's also hard to say at what point a person's reputation really becomes ruined. What is the difference between a large number of people expressing their displeasure at your actions and reputation-ruining? Consider a case in which a person has done something that ticked off many people online. Those people have written a lot of bad things about the person. Those bad things only have an impact if the people reading them take them at face value. But this must vary. The impact of those bad things likely depends on how many good things may counter-balance the bad things. It also depends on how reputable the sources of the bad things are (are we talking The New York Times or someone's blog, a blog that pretty obviously has a bias to it or is the work of someone with a personal grudge?). It also depends on the reader's goal (are they thinking of hiring a person? Considering dating them? Just randomly curious about them?).

Then there's another factor which really interests me: a kind of general savvy-ness on the part of the reader about what he/she/they read about anyone online. It seems likely that in the early history of Google, blogs, and social media, the average internet user would be inclined to believe what they read online regardless of its source (maybe this habit carried over from the era of mainstream information in which readers assumed some baseline level of veracity because of the gate-keeping function of mainstream sources and the extent to which they were accountable for publishing untruths because they were trying to protect their public reputation). My bet is that as time goes on and people run into more and more inaccurate, biased, or misleading information from non-reputable sources like social media posts and blogs, they will learn to discount information from these sources. If this is the case, a disparaging social media post in 2007 (assuming a non-savvy reading public) would have a far greater impact on the subject's reputation than a disparaging social media post in 2016 (assuming a somewhat more savvy, skeptical reading public). The savvy-ness and skepticism of the reading public must be taken into account when considering the actual impact of online disparagement on one's reputation.

3) A group of people on the internet provide enough pressure to make the person (or corporation) change their behavior. You don't have to engage in any kind of harassment or reputation-ruining to have this effect. Also, I wouldn't really call it a punishment, but it is a kind of judgment. The person or company at the center of it may just make a kind of calculation: "would I rather persist in my unpopular behavior now that it is known to so many people and unpopular among so many people, or should I change my behavior?" They often make the perfectly understandable decision to alter their behavior just so that they can get on with their lives. They don't have a moral high ground on which they can claim that they were being harassed or defamed. A lot of people didn't like what they were doing and publicly expressed this displeasure (which they are entitled to do) and this made life tough for them, so they changed their behavior.

The traditional justice system has flaws: it's slow and sometimes it gets things wrong. The court of public opinion has flaws: people's emotion and the extent to which an opinion is shared by others who are similar to them shape their reasoning. The court of public opinion also gets things wrong, but in a different way. Whereas traditional, established power structures and hierarchies often bias traditional justice systems, emotion and ingroup/outgroup tribalism bias the court of public opinion.

The court of public opinion has a certain appeal to it. It feels more democratic than the justice system. It feels like the people have the power while the justice system feels like (often un-elected) elites have the power. There's the sense that the court of public opinion compensates for the failings of the traditional justice system.

When thinking about any online phenomenon, I always like to try to answer the question, "is this really all that new?" or "what is it, specifically, that is new about it?"

I'm no historian of justice, but I'm pretty sure that the court of public opinion is not new at all. There has always been this kind of shadow justice system. You could do real violence to someone's reputation in a small village if you and some other folks disapproved of what that person was saying or doing. My sense is that the group of people doing the judging and punishing are different online than they would be offline. In the offline court of public opinion, you're tried by members of your community. In the online court of public opinion, you're tried by groups of people who a) have internet access, b) have the time and motivation to read about and post about matters of justice.

This leads me to ask a couple of questions (maybe this is the beginning of yet another research agenda!): First, who are those people posting about matters of justice? How many of them are there? What are their beliefs? Where do they come from? My hunch is that public opinion relating to matters of justice as it manifests itself online is really the opinion of a relatively small (10%?) chunk of the public that posts about events happening all over the world (or at least in their country), and that it's, on average, younger and wealthier than the average citizen. It'll be tough to know much about these folks because so much posting is anonymous or pseudonymous, but who knows, we might be able to at least start to put together some answers.

Second, I'm really curious about that "reader savvy-ness" variable. We tend to focus on those posting online, but what about those reading those posts? There might be a certain understanding that develops on the part of the reader, a certain heuristic for identifying sources of more biased, more emotional information (Twitter) and less biased, less emotional information (Wikipedia). Information is curated on Twitter and Wikipedia in different ways: it's not just one big homogeneous internet, and it's hard to believe people treat it as such. Maybe lots of people already use heuristics like this, maybe not. That's why we do the research.

Tuesday, July 12, 2016

What Does Everyday Media Use Look Like? (and why this might bias our perception of digital media's effects)

Two brief anecdotes:

1: I’m sitting in Railroad Park in Birmingham, enjoying a breezy, temperate Saturday morning. A few folks pose for a photo that someone else takes with a smartphone. I guess I notice these people first in part because I cannot take a picture with my smartphone - the camera in it is broken. There is nothing like being robbed of something that you have taken for granted to get you to notice it more and to think about it more. I see a few other people in the park typing messages on their smartphones. Other people talk to one another face-to-face. Others stand around and take in the landscape, pet other people’s dogs, read a magazine or a book. And here I sit, typing on a laptop. What do we look like to each other? What kinds of assumptions might we be making about each other based on what we're doing in a public place with or without media technologies?

2: I'm enjoying a beer and a musical performance at Band of Brothers, a new brewery here in Tuscaloosa. The beer, the music, and the general vibe are all great. At the end of the performance, the lead singer, an elegant, attractive, charismatic young woman, sits down at a table near the stage, gets out her smartphone, and seems to instantly transform herself into a zombie, hunched over a tiny screen that illuminates her dead-eyed stare.

It isn't hard to find people who are concerned about excessive media technology use (in particular excessive smartphone use). The most popular books about digital media use and the most popular articles take a pretty dim view of it. If there was a utopic moment in the history of such technologies, that moment seems to have passed. This recent commercial for Cisco feels like a relic of a time before we became so deeply suspicious of digital media (perhaps it's a response to that suspicion. Why else would we need a "pep talk" about the virtues of technology?). 

Where do we get these ideas about media use? Or: why are we so keen to agree with, and so reluctant to question, those who provide anecdotal evidence of its ill effects?

Often times, we are heavily influenced by our direct observations: we look at the people around us, the students in our classes, the family members at our cookouts, the friends at parties, the kids at our friends' houses, and the people in the park. Direct observation informs our intuition, and we seek out confirmation of that intuition in the books, blogs, articles, and documentaries we consume.

I've been thinking about how the relationship between our typical first-hand observations of media use and the actual fact of media use (regardless of whether or not it is observed by others) has changed over the years, and how this might influence our opinions of media use and its effects and may, in part, account for the dystopic view that seems to be dominating public discourse on the topic. 

One change: media use has become more public, and hence more visible. People watched a lot of TV before, teens used instant messenger a lot before, but they engaged in these activities mostly in their own homes. The ascendant variety of media use, mobile media use (primarily smartphones), is much easier to observe than TV use or AIM use on a home computer. Thus, we may think that there is a lot more media use going on than there was before, but this may be distorted by the fact that it is simply more easily observed. Evidence suggests that average screen time is increasing, but by the way people talk about digital media, you would think we spent very little time staring at screens before the smartphone, or that smartphones have doubled the amount of time people spend looking at screens (the real change in average screen time among Americans over the past 4 years is probably around 7%).

There are, of course, older media that are used in public: books, newspapers, and magazines. But there is an important difference between print media and smartphones: in the case of print media, it is easy for observers to know precisely what the print media user is reading, while it is difficult for observers to tell what users are doing with mobile media. Observers typically cannot see the screens of smartphone or tablet users; even when they physically can see the screens, there is an expectation that they do not look too closely at them. So, observers know that the other person is using media but they don’t know how they are using it: work email, liberal news, conservative news, nearby places to eat, connecting with family members on social media, bragging on social media, looking up how to build a bomb, how to build a raised-bed garden, etc.

What they are doing with media really matters. If they’re being social and supporting others while staring at the plastic rectangles in their hands, then this is very different from obsessing over how many people like what they've just posted on social media, and both of these things can be done on the same website or application (e.g., Snapchat, Facebook, Reddit). There is such a broad range of activities in which a public media user could be engaged. My sense is that an anxiety arises simply because observers know so little about the others with whom they share space.

Then there's the question of what people would be doing if they weren’t using their phones. How do the particular activities in which they are engaged on their smartphones stack up to those other possibilities? If they weren't engaged in those media use activities, would they be talking to other strangers face-to-face? Would they be reading a magazine? Would they have spent time in deep, productive contemplation? Would they have stood there and dwelt on a mistake they made the day before? Would they have stayed home and watched TV? I ask these questions not in a rhetorical sense, to assert that the fear and resentment so many feel about media use is exaggerated. I raise them because I honestly don’t know, and because I believe that one cannot honestly say whether the fears about increased screen time and smartphones are justified or not without evidence that speaks to these questions.

What about witnessing the media use not of strangers, but of someone more familiar to us: a friend, spouse, parent? Our observations of strangers are largely free of interpersonal influences; whether two people at the park talk to one another or ignore one another and stare at their smartphones doesn't directly affect our interpersonal relationships with them or anyone else. It's simply a snapshot of behavior in our society. When we witness the media use of someone with whom we have some sort of relationship, there is an emotional component to our judgment of their use. Commonly, we compare their media use to one particular alternative: having a good conversation with us.

This view ignores other alternatives. If they didn't have a smartphone or laptop, perhaps they would have elected to watch television, or would have elected to leave the room and call a friend on a landline phone in the other room, or would have read a book, newspaper, or magazine. Perhaps we would have had all of their attention instead of some of it, but perhaps we would have had none of it. We only see that they are not talking to us, which is something we wouldn't see and be reminded of if they were not in the room with us.

Compare the experience of sharing a room with someone you know while that person uses a smartphone to the experience of watching TV with someone you know. In the case of the TV-watching friend, we know what our friend is watching; we're watching it, too. In the case of the digital media user, we're likely to fear the worst when we can't see what the other person is doing on their smartphone or laptop, and the fear (of not really knowing this person and what they're up to) likely has a greater impact on us when it is a close friend, spouse, child, or parent. TV can spark conversation, but then again, so can smartphone use. During my casual observations of smartphone use at bars and coffee shops, I've noticed frequent "screen sharing" behavior in which phone use serves as the impetus for conversation rather than an alternative to it. I've also participated in such "phone-aided" conversations at home with my wife.

When we see other people using smartphones and laptops, we feel ignored. We often compare the situation to ideal alternatives rather than making the effort to determine what the likely alternatives might be. We don't think about what the person is doing with the media (often because the expectation of privacy prevents us from knowing this). When we see the elegant, charismatic performer transformed into a hunched-back zombie, we feel a visceral repulsion. This is what we do by default. We then seek out justification for these feelings in anecdotes, books, articles, documentaries, etc.

Making any sort of correct judgment about the impact of media technologies on society necessitates that we recognize the ways in which we respond emotionally to the sight of other people's media use. By the looks of the most popular opinions on smartphone and laptop use, many of us have yet to take that step.


Thursday, May 19, 2016

Egg-manning: Arguing with the fringes

I've been thinking more about writer Dan Brooks's post about the death of the Straw Man and the rise of the Egg Man. I'm not in love with Brooks's name for this "egg-manning" phenomenon (for one thing, googling it currently yields pictures of Peyton Manning being egged, which I'm not necessarily opposed to, but is indicative of the requirement of lexical singularity in the age of Google). So far, I've only been able to find one other use of it online (by blogger Tim Hall). But as I read more opinion pieces on both mainstream news websites like the New York Times and encounter news via social media (on Facebook and Twitter, where the person sharing the opinion piece is, in effect, endorsing the argument), I keep returning to this concept.

While straw-manning involves making up an imaginary person who holds a view opposed to your own just so that you can refute it, egg-manning involves finding a real person advancing a real view on social media just so that you can refute it. Finding an actual person essentially justifies the necessity of making your argument. There really are people who must be argued against!

Straw-manning involves assuming that someone out there disagrees with your argument. Egg-manning does not assume this, but instead makes another, typically unstated, assumption. The assumption that egg-manners make is that the person making the argument opposed to their own is part of a large, influential group of people, and that the expression of the argument is part of a larger trend. Rarely is any particular opinion held by only one person; you can search for a hashtag or a term or through various interconnected bits of the blogosphere or social news networks and usually find hundreds or maybe thousands of examples of the argument which you wish to argue against.

The next step in the assumption is the implicit or explicit linking of these hundreds or thousands of people actually stating the argument online with a larger mainstream group that, while not actually stating the argument, has stated or acted in such ways that make it clear that they believe in some of the same things as the group that actually states the argument online. Most often in the U.S., these large mainstream groups are liberals/Democrats or conservatives/Republicans. Sometimes, they're smaller and/or more amorphous groups, like racists, sexists, Tea-Partiers, Social Justice Warriors, fraternity brothers, hipsters, or many other groups that are partially defined by their stated beliefs and actions. The largest, most amorphous group to be argued against in the U.S. right now is The Establishment. Dissimilarities between ideas held by the small group actually stating the argument online and the large group not stating the argument are ignored in favor of whatever they have in common, which, to the egg-manner, represents a coherent ideological framework.

Even if that group of people isn't large right now, it could become large in the future if its ideas are not argued against. Often times, an example from history is provided to show how quickly ideas can spread if they are not forcefully countered with another argument, a time when silence sealed the fate of a people. In the past, small, vocal groups of people got large less-vocal groups to go along with them. This is intended to make it clear that arguing against a dangerous idea is not so much an act of participating in civil discourse (and thus not subject to the informal rules of civil discourse), but a kind of duty, and that to fail to argue against it would constitute negligence of one's duty.

Sometimes, the egg-manners are correct, as people who make assumptions occasionally are: a dangerous view that once was fringe becomes mainstream. It can happen quickly with viral spread of ideas through social media. But other times, the egg-manners are incorrect. The view never becomes one held by more than an uninfluential fringe group online. Of course, if you are able to indefinitely postpone the point at which you believe the fringe idea will become mainstream, you can never be proven wrong, but if we were to require that the aforementioned assumptions be tested (which requires setting a finite time frame in which the fringe idea would become mainstream), I think that many assumptions like this would turn out not to be true.

Then there is the possibility of a backfiring effect: that by arguing against the fringe idea, egg-manners give legitimacy to it, thus bringing about its popularity. Not only are egg-manners raising the profile of ideas with which they argue; they are also providing more examples of opposite arguments for the egg-manners on the other side of the argument to use in their egg-manning. There's likely an emotional component to the way in which egg-manning fuels that which it seeks to fight: anger from one side fuels anger from the other.

It's unclear whether egg-manners consider their arguments to be "arguments" in the traditional sense - attempts to convince another person of the truth and/or to provide support for a like-minded person who feels alone - or whether they consider them to be acts of self-expression. If it is the former, the egg-manner should care about the impact of the argument. But if it is the latter, being wrong and ineffective may not matter. It's a rhetorical maneuver that continues to interest me. If only it had a better name.



Monday, May 02, 2016

Prince and the art of making yourself scarce

Among the surprisingly strong feelings I experienced after Prince's passing (especially while watching some of the relatively-high-quality concert videos people have been posting) was a kind of shame at having taken so long to recognize how good he was. Part of the reason the strength of the feelings has been so surprising is that I was never a big Prince fan. This wasn't the normal level of regret one feels when an artist one may have taken for granted dies. This was a sense that I might have been a much bigger fan of Prince had I listened to more of his music. But the answer to the question of why I didn't listen to more of his music, I think, has much to do with the unique way Prince produced music and managed access to it.

On the one hand, he produced a huge amount of material. This may have diluted his "brand." I don't mean "brand" in the commercial/corporate context, so maybe that's not the right word. I just mean that when I thought of Prince, I thought of all the music I hadn't listened to. It's subjective whether the material was consistently good (and it is rare for any artist to produce a lot of consistently good material; far easier to produce a few ground-breaking albums, call it quits, and leave the audience wanting more), but the mere fact that there was so much of it raises this question: where do you begin? The choice to listen to Prince wasn't whether or not to spend $10 - $40 on a few albums (as the choice might be to buy all of Guns n' Roses' oeuvre). There were hundreds of songs, and while there is some consensus that his earlier albums were among his best, there were plenty of gems scattered throughout the rest of his career. It would seem random to buy one late-era Prince album and ignore the others, but buying them all would cost a lot.

This leads us to the unusual way in which he regulated access to the music. Ever since 1999, the year Napster went mainstream, musical artists have had to balance the added exposure that comes with free distribution with the fact that giving things away for free is no way to make a living. Streaming services like Spotify and YouTube's Vevo channel are kinds of compromises that allow artists to make some money (arguably too little) while music consumers are able to listen to whatever they want either for free with advertising or for a small subscription fee. The more artists transfer over to that model, the more appealing a service like Spotify becomes. From the perspective of the music consumer, you could keep paying your monthly fee to Spotify and get to listen to what most new artists produce, or you could pay 10 bucks to listen to one album by one artist. The shift in value was incremental and difficult to notice - it wasn't like a single label or artist deciding to provide their music in a certain way tipped the scale. But at some point, the scale tipped. Music is as valuable to individuals and society as it ever was, but the value of individual artists or songs shifted when we started consuming music in different ways.

Thinking about how I missed the boat on Prince until now makes me think about how we recognize artistic excellence in today's world. I get a sense that there is a kind of skepticism about it now, a desire to ask, "how good could he possibly be? Wouldn't more people have been listening to him and making a bigger fuss about his music over the past several decades?" The question of who gets celebrated as a musical genius isn't just a question of subjective judgment of talent (though it is that, too). It's a question of how output and access influence our estimates of excellence. If something is even moderately awesome, we all hear about it, see video of it, and post it on social media right away. Encountering some of the videos of Prince's performances is so jarring because we've become accustomed to a world without secrets (and that includes secret genius). It's one thing to unearth an under-appreciated artist or work. This practice has become commonplace online: a sophisticated content curator spends hours digging through the detritus of YouTube so that we don't have to, and presents us with an overlooked or forgotten work of genius.

Prince's work was different. It was sitting there in plain sight; it just happened to be behind a paywall. That wall came down (at least temporarily) in the wake of his death, and it really did feel like something brilliant that had always been in your immediate vicinity had been suddenly revealed, rather than feeling as though a curator dug up a hidden gem.

I also get the sense while watching videos of the unbelievable live performances that Prince wasn't made for the world of sampling and covering, of copying and pasting, of virality and memes, not only because of what he produced and how he managed access to it, but also because of his performance of self. A large part of the appeal with Prince is the performer, some un-copyable charisma that he had. Whereas a Beatles or Metallica melody might sound interesting if interpolated by another artist, a cover of a Prince song would just make whoever was covering it look positively un-charismatic by comparison. Access to Prince's live performances is (or at least was) limited to begin with (similarly, this is a reason why Hamilton can still be a phenomenon in the age of digitally reproducible art). It's true that when the artist dies, the recordings (including the recordings of live performances) will live on, but the recordings are once-removed from the actual ecstatic experience of being there, with the performer, with the crowd. So watching them also makes me sad. Once the performer dies, the party's over.


Tuesday, March 15, 2016

The Truth, Online

Reading Jill Lepore's review of Michael Patrick Lynch's new book, The Internet of Us, reminded me to write something on the topic of truth. I haven't read Lynch's book yet, but even the sub-title ("Knowing more and understanding less in the age of big data") gave me that all-too-familiar twinge of jealousy, of feeling as though someone had written about an idea that had been gestating in my mind for years before I had the chance to write about it myself, of being scooped. So, during this brief lacuna between the time at which I learned of this book's existence and the time at which I actually read it, let me tell you what I think it should be about, given that title. That is: how is it possible that the Internet helps us to know more and to understand less? Or, to take Lepore's tack, what is the relationship between the Internet/Big Data and the truth/reality?

At this stage, I have only a semi-organized collection of ideas on the topic. I'll base each idea around a question.

To what extent has truth (or reality) become subjective in the age of the Internet/Big Data? 

I think we vastly overestimate the extent to which the Internet has fragmented our sense of truth and/or reality. And by "we," I mean most people who think about the Internet, not just scholars or experts. My sense is that it is a commonly held belief that the Internet allows people access to many versions of the truth, and also that groups of people subscribe to the versions that fit their worldviews. This assumption is at the core of the "filter bubble" argument and undergirds the assertion that the Internet is driving fragmentation and polarization of societies.

I contend that most people agree on the truth or reality of most things, but that we tend not to notice the things we agree on and instead focus on the things on which we do not agree. Imagine that we designed a quiz about 100 randomly selected facets of reality. We don't cherrypick controversial topics. It could be something as pedestrian as: "what color is the sky?" ; "if I drop an object, will it fall to the ground, fly into the sky, or hover in the air?" ; "2 + 2 = ?" I'd imagine that people would provide very similar answers to almost all of these questions, regardless of how much time they spend on the Internet. Even when we do not explicitly state that we agree on something, we act as though we believe a certain thing that other people believe as well. We all behave as if we agree on the solidity of the ground on which we walk, the color of the lines on the roadways and what they mean, and thousands of other aspects of reality in everyday life.

The idea that reality or truth is becoming entirely subjective, fragmented, or polarized is likely the result of us becoming highly focused on the aspects on which we do not agree. That focus, in turn, is likely the result of us learning about the things on which we do not agree (that is, of us being exposed to people who perceive a handful of aspects of reality in a very different way than we perceive them) and of truth/reality relating to these handful of aspects genuinely becoming more fragmented. Certainly, it is alarming to think about what society would look like if we literally could not agree on anything, either explicitly or implicitly; so, there is understandable alarm about the trend toward subjectivity, regardless of how small and overestimated the trend may be.

So, I'm not saying that truth/reality isn't becoming more fragmented; I'm only saying that part of it is becoming that way, and that we tend to ignore the parts that are not.

It's also worth considering the way in which the Internet has unified people in terms of what they believe truth/reality to be. If we look at societies around the globe, many don't agree on aspects of world history, how things work, etc. Some of those people gained access to the Internet and then began to believe in a reality that many others around the globe believe in: that certain things happened in the past, that certain things work in certain ways. Reality and truth were never unified to begin with. The Internet has likely fragmented some aspects of reality and the truth for some, but it has also likely unified other aspects for others.

Maybe I'm just being pedantic or nit-picky, but I think any conversation about the effects of the Internet on our ability to perceive a shared truth/reality should start with an explicit acknowledgment that when people say that society's notion of truth/reality is fragmented, they actually mean that a small (but important) corner of our notion of truth/reality is fragmented. Aside from considering the net effects of the Internet on reality (has it fragmented more than it unified?), we might also consider this question:

What types of things do we agree on?

Are there any defining characteristics of the aspects of truth/reality on which we don't agree? When I try to think of these things (things like abortion, gun rights, affirmative action, racism, economic philosophy, immigration policy, climate change, evolution, the existence of god), the word "controversial" comes to mind, but identifying this category of things on which we don't agree as "controversial" is tautological: they're controversial because we don't agree on them; the controversy exists because we can't agree.

So how about this rule of thumb: we tend to agree on simple facts more than we agree on complex ones. When I think of the heated political discourse in the United States at this time, I think about passionate disagreements about economic policy (what policy will result in the greatest benefit for all?), immigration (ditto), gun rights (do the benefits of allowing more people to carry guns [e.g., preventing tyrannical government subjugation, preventing other people with guns from killing more people] outweigh the drawbacks [e.g., increased likelihood of accidents; increased suicide rates]?), and abortion (at what point in the gestational process does human life begin?). These are not simple issues, though many talk about them as if the answers to the questions associated with each issue were self-evident.

I can think of a few reasons why truth/reality around these issues is fragmenting. One is, essentially, the filter bubble problem: the Internet gives us greater access to other people, arguments, facts, and data that can all be used by the motivated individual as evidence that they are on the right side of the truth. In my research methods class, I talk about how the Internet has supplied us with vast amounts of data and anecdotes, and that both are commonly misused to support erroneous claims. One of these days, I'll get around to putting that class lecture online, but the basic gist of it is that unless you approach evidence with skepticism, with the willingness to reach a conclusion that contradicts the one you set out to find, you're doing it wrong. Dan Brooks has a terrific blog post about how Twitter increases our access to "straw men." So, not only does the Internet provide us with access to seemingly objective evidence that we are right; it also provides an infinite supply of straw men with which to argue.

In these aforementioned cases in which we disagree about complex issues, we tend not to disagree about whether or not something actually happened, whether an anecdote is actually true or whether data is or is not fabricated. Most disagreements stem from the omission of relevant true information or the inclusion of irrelevant true information. We don't really attack arguments for these sins; we tend not to even notice them, and instead talk past each other, grasping at more and more anecdotes and data (of which there will be an endless supply) that support our views.

If it is the complex issues on which we cannot agree, then perhaps the trend toward disagreement is a function of the increasing complexity and interdependency of modern societies. Take the economy. Many voters will vote for a candidate based on whether or not they believe that the policies implemented by that candidate will produce a robust economy. But when you stop and think about how complex the current global economy is, it is baffling how anyone could be certain that his or her policies would result in particular outcomes. Similarly, it is difficult to know what the long-term outcomes of bank regulations might be, or of military interventionism (or the lack thereof). Outcomes related to each issue involve the thoughts, feelings, and behaviors of billions of people, and while the situations we currently face and will face in the future resemble, in some ways, situations we've faced in the past (or situations that economists, psychologists, or other "ists" could simulate), they differ in many others that are difficult to predict (that's simply the nature of outcomes that involve billions of people over long periods of time). And yet we act with such certainty when we debate such topics! Why is that? This leads to my last question:

Why can't we arrive at a shared truth about these few-but-important topics?

First, there is the problem of falsifiability. Claims relating to these topics typically involve an outcome that can be deferred endlessly. For example, one might believe that capitalism will result in an inevitable worker revolution. If the revolution hasn't occurred yet, that is not evidence that it will never occur; only evidence that it hasn't occurred yet. There's also the problem of isolating variables. Perhaps you believe that something will come to pass by a certain time and it doesn't, and you ascribe that failure to a particular cause; but unless you've made some effort to isolate the variable, you can't rule out the possibility that the cause you identified had nothing to do with the outcome.

There are falsifiable ways of pursuing answers to questions relating to these topics. And despite all the hand-wringing about the fragmentation of truth/reality on these topics, there are also plenty of folks interested in the honest pursuit of these answers; answers that, despite the growing complexity of the object of study (i.e., human behavior on a mass scale), are getting a bit easier to find with the growing number of observations to which we have access via the Internet.

The other problem is the lack of incentive to arrive at the truth. Oftentimes, we get an immediate payoff for supporting a claim that isn't true, in the form of positive affect (e.g., righteous anger, in contrast to the feeling of existential doubt that often comes with admitting you're wrong) and staying on good terms with those around you (admitting you're wrong is often inseparable from admitting that your friends, or family, or the vast majority of your race or gender or nationality are wrong). So, there are powerful incentives (affective and social) to arrive at certain conclusions regardless of whether or not these are in line with truth/reality. In contrast, the incentives to be right about such things seem diffuse. We would benefit as a society and a species if we were all right about everything, right?

I suppose some would argue that total agreement would be bad, that some diversity of opinions would be better. But we don't tolerate diversity of opinion on whether or not the law of gravity exists, or whether 2+2 = 4. Why would we tolerate it in the context of economic policy? Is it just because of how complex economies are, and that to think you have the right answer is folly? (I suppose that's a whole other blog entry right there, isn't it?) But certainly, even if you believe that, you'd agree that some ideas about economies are closer to or further from the truth and reality of economies. So, perhaps what I'm saying is this: if we lived in a society where "less right" ideas were jettisoned in favor of "more right" ideas, we would all benefit greatly, but the benefits would only come if a large number of us acted on a shared notion of the truth, and the benefit would be spread out among many (hence, "diffuse").

But what if there were an immediate incentive to be right about these complex issues, something to counter the immediate affective and social payoffs of being stubborn and "truth agnostic?" I love the idea of prediction markets, which essentially attach a monetary incentive to predictions about, well, anything. You could make a claim about economic policy, immigration policy, terrorism policy, etc., and if you were wrong, you would lose money.

Imagine you're a sports fan who loves a particular team. You have a strong emotional and social incentive to bet on your team. But if your team keeps losing and you keep betting on your favorite team, you're going to keep losing money. If you had to participate in a betting market, you'd learn pretty quickly how to arrive at more accurate predictions. You would learn how to divide your "passionate fan" self from your betting self. And if you compare the aggregate predictions of passionate fans to the aggregate predictions of bettors, I'd imagine that the latter would be far more accurate. I would assume it would work more or less the same way with other kinds of predictions. People would still feel strongly about issues and still be surrounded by people who gave them a strong incentive to believe incomplete truths or distorted realities. But they would have an incentive to cultivate alternate selves who made claims more in tune with a shared reality.
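To make the incentive piece of this concrete, here's a minimal toy simulation in Python. Everything in it is invented for illustration (the 35% win rate, the forecasts, and the payoff rule), and it is not how any real prediction market settles bets. It just compares a "passionate fan" who always forecasts a win with a calibrated bettor who has learned the team's actual base rate, and pays each of them according to a simple Brier-style scoring rule.

# Toy sketch of the incentive argument. All numbers and the payoff
# rule are hypothetical; this is not a real prediction-market mechanism.
import random

random.seed(0)

TRUE_WIN_PROB = 0.35   # assume the team actually wins 35% of its games
N_GAMES = 1000

def brier_loss(forecast, outcome):
    # Squared error between a probability forecast and a 0/1 outcome.
    return (forecast - outcome) ** 2

fan_bankroll = 0.0
bettor_bankroll = 0.0
for _ in range(N_GAMES):
    outcome = 1 if random.random() < TRUE_WIN_PROB else 0
    fan_forecast = 0.80      # the passionate fan always thinks "we've got this"
    bettor_forecast = 0.35   # the bettor has learned the base rate
    # Pay 1 minus twice the Brier loss: accurate forecasts earn money, biased ones lose it.
    fan_bankroll += 1 - 2 * brier_loss(fan_forecast, outcome)
    bettor_bankroll += 1 - 2 * brier_loss(bettor_forecast, outcome)

# Over many games the calibrated bettor ends up far ahead of the loyal fan.
print("fan:   ", round(fan_bankroll, 1))
print("bettor:", round(bettor_bankroll, 1))

The same logic, presumably, is what would push people making claims about policy outcomes to cultivate the more calibrated "betting self" described above.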

Of course, not all issues lend themselves to being turned into bets (how would one bet on whether or not life begins after the first trimester?), but it still seems like, at least, a step in the right direction, and gives me hope for how we can understand the truth and our relationship to it in the Internet age, perhaps even better than we did before.

Tuesday, January 19, 2016

Perception Becoming Reality: The Effects of Framing Polls and Early Primary Election Results on Perceived Electability and Voting Behavior

National polls (and, in the coming weeks, the results of early primaries) present potentially misleading information about presidential primary candidates' chances of winning the eventual nomination. The actual likelihood depends on several facets of the primary electoral process: how many delegates each state's voters assign; whether or not a state is winner-take-all; the "triggers" and "thresholds" that allocate delegates to particular candidates; and when a given state votes during the process. Add to that the effect of whether or not other candidates drop out of the race and whom the voters of those candidates then decide to support.

A lot of this can be, and has been, modeled. You can model how many people would vote for each candidate in each state (even if there isn't accurate polling data in some states) based on what you know about the relationship between, say, education and the likelihood of supporting a particular candidate. You can estimate who each voter's second, third, or fourth choice would likely be (i.e., how things will shake out when candidates start dropping out of the race). You can know what the rules are for delegate allocation in each state and how many delegates are in each state. When you take all of this into account, at least for the Republican candidates right now, you end up with a disjuncture between what the polls say and what the early primary results will likely show (Trump and Cruz well ahead of Rubio) and who would actually get the most delegates if all the primaries were conducted today (Rubio, probably).
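Here's a rough sketch of the kind of model I mean, written in Python. The state sizes, vote shares, threshold, and allocation rules below are all invented for illustration (they are not real 2016 polling data or actual party rules); the point is just to show how a candidate can lead in raw support across states and still trail badly in delegates once winner-take-all states and thresholds enter the picture.

# Toy delegate-allocation sketch. Every number below is hypothetical;
# this is not real polling data or any party's actual allocation rules.

def allocate(delegates, shares, winner_take_all=False, threshold=0.20):
    # Split one state's delegates among candidates given their vote shares.
    if winner_take_all:
        winner = max(shares, key=shares.get)
        return {winner: delegates}
    # Otherwise allocate proportionally among candidates clearing the threshold.
    eligible = {c: s for c, s in shares.items() if s >= threshold}
    total = sum(eligible.values())
    return {c: round(delegates * s / total) for c, s in eligible.items()}

# Hypothetical states: (delegate count, winner-take-all?, candidate vote shares).
states = [
    (50, False, {"A": 0.35, "B": 0.30, "C": 0.25}),
    (40, False, {"A": 0.33, "B": 0.32, "C": 0.28}),
    (99, True,  {"A": 0.30, "B": 0.28, "C": 0.36}),
    (80, True,  {"A": 0.31, "B": 0.29, "C": 0.34}),
]

totals = {}
for delegates, wta, shares in states:
    for candidate, won in allocate(delegates, shares, wta).items():
        totals[candidate] = totals.get(candidate, 0) + won

# Candidate A has the highest average vote share across states, but C wins
# both winner-take-all states and ends up with far more delegates.
print(totals)

A real model would add the state-by-state demographic adjustments and second- and third-choice preferences described above, but even this crude version shows why poll leaders and delegate leaders can diverge.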

The crazy thing about this is that the emphasis on current national polls and early primary results in the media (which, as far as I'm concerned, paints a misleading picture of how people would vote if the primaries were all held today) might change later primary voters' perceptions of the electability of their favored candidate, causing them to abandon that candidate and switch to another one.

Surely, there will be some people voting in later primaries who will "stand their ground" and still vote for their favored candidates, regardless of what national polls or early primary elections say. Also, there are many reasons why those voting in later primaries may change their opinion over the coming months: for example, they may get more information about the candidates, or their favored candidate may say or do something they don't like. But I think at least one possible cause of switching candidates has to do with perceived electability, and that perceived electability could be based on the misleading information from national polls and early primary results.

So then, how will the misleading information sway voters?

My guess is that Trump and Sanders (and possibly Cruz) will keep referring to the polls and the early primary results, claiming it to be evidence of their electability. They would do this in hopes of a herding effect. For Republicans, people in late-voting states who would've voted for Rubio will see supporting Rubio as supporting a likely loser. Spending time and energy supporting him would be a waste, and possibly embarrassing. This would cause them to abandon Rubio and either fall in line with the herd developing around Trump and/or Cruz (likely due to an "anyone but Hillary" sentiment) or sit out the primary vote altogether. For the Democrats, Hillary supporters residing in late-voting states who were on the fence and perhaps supported Hillary only because they thought Bernie didn't have a shot would think that Bernie did have a shot, and switch over to Bernie.

However, this strategy of emphasizing national polls and early primaries might backfire for Trump. He'll keep saying he's winning and will successfully convince people he's likely to win the nomination, but this might freak other voters out ("oh my god, he could actually win!"). This might cause people who would have sat on the sidelines to vote against him. It might cause wealthy donors to throw more money at Cruz or Rubio. It might cause other candidates to drop out sooner and endorse Cruz or Rubio. Call this the "panic mode" reaction to the perception that Trump could win.

There are, of course, many X factors that could swing the election: the economy tanks, someone says something stupid, scandals, terrorist attacks, etc. But I think one factor is whether people think national polls and early primary results predict eventual electability. And whether people think this depends on what they hear both from the candidates themselves and from the news.

The news will likely present a "horse-race" framing of the election, not because they want Trump or Cruz or Sanders to win, but because they want a close race, because it's a simpler story, and because this will boost ratings. There is a chance that some news outlets (I'm looking at you, NPR and NYTimes) will try to convey the complex relationship between staggered primaries with various delegate allocation rules and public opinion. I think the likelihood of any of the above scenarios playing out depends on whether news outlets use the simple, misleading frame or the more nuanced one.


Saturday, January 02, 2016

The Awkwardness of Walking a High School Hallway (or, Digital Tribes: Gamers, Socialites, and Information Seekers)

This thought came to me while reading this New York Times article on app makers' attempts to understand how teens use smartphones and what they want out of the experience. In particular, I was struck by this sentence: "And when your phone is the default security blanket for enduring the awkwardness of walking a high school hallway, it feels nice to have a bunch of digital hellos ready with a swipe."

I thought of my own experience in high school. Indeed, it was awkward. I didn't have a phone as a security blanket. I suppose I just thought about the things that mattered to me as a way of escaping the awkwardness. I thought about the video games I'd play when I got home, or the movies or music I loved. Social media didn't exist. Maybe I thought about hanging out with my friends the following weekend.

Also while reading this sentence, I thought of my nephew (age 9) and niece (age 5). They're both too young for social media and smartphones, but I started thinking about what they'd be like when they are old enough to use these things. My nephew is already enamored with video games, in particular Minecraft. It seems unlikely that he'll be a heavy user of social media, and very likely that he'll spend a lot of time playing video games. My niece plays video games, and I honestly am not sure whether she'll stay interested in video games and/or develop an intense interest in social media, like many middle school and high school girls.

But as I read this article, and as I imagined how my nephew and niece would use media when they get to high school, a picture started to emerge in my head, a picture of at least two, maybe three, relatively distinct "tribes". One tribe spent most of their screen time using social media like Instagram or Snapchat. Another tribe spent most of their screen time playing video games. Of course, there would be some overlap: the gamers wouldn't totally forsake social media, and those who spent a lot of time with social media would also play some games. But they would differ in terms of how these media experiences fulfilled some fundamental needs or desires, how digital media provided a kind of default security blanket for them during the awkward teenage years.

For the gamers, video games would deliver a sense of challenge and accomplishment, and sometimes a sense of esteem (others see what you've accomplished and admire you). They also would provide camaraderie via the community of gamers.

For the social media users (let's call them "socialites"), social media would deliver a sense of social support and esteem, evidence that people are paying attention to you, that people like you, that you're not alone.

And perhaps there would be a third group: information seekers/entertainment consumers - people who use media primarily to consume rather than interact; consume news, consume educational material, consume movies, music, etc. I think I was one of these types of people in high school, and I think they still exist in high schools. Some kids aren't that into gaming or social media. They love movies, music, books, etc.

These are distinct groups driven by distinct desires. This brings me back to Uses & Gratifications theory, a theory that I'm not too fond of (because I don't think people are very good at reflecting on why they use media), but one that might be of some use in determining what the positive or negative effects of media might be.

So what? Why do these categories matter?

Well, for light-to-moderate users, all of these types of media use might help to keep young people happy and engaged with the world around them, give them a sense of belonging and fulfillment. The particular kind of media use that provides that sense of belonging and fulfillment won't be the same for everyone.

What about heavy media use? Well, heavy use is probably bad for all groups, but bad in different ways. For those in the social tribe, heavy use would be associated with a kind of fragile ego and need for validation from others, and preoccupation with this validation. For gamers, heavy use would be associated with not caring about accomplishments in the real, non-game world (i.e., not caring about grades, not caring about social connections with real-world peers, not caring about one's health, etc.), a kind of disappearing into the game world. For information seekers, heavy use might be associated with a kind of "filter bubble" problem: they get further and further into a particular view of the world without being forced to see messages from other perspectives or without interacting with people who, inevitably, will hold at least slightly different opinions.

If you just measure "internet use" or "smartphone use" as they relate to these outcomes, you might not find any effects, simply because the lack of effects in the other two groups "washes out" the significant effect in a single group. That doesn't mean the effects aren't there. By differentiating among these tribes (not necessarily by asking young people how they identify, but by measuring their actual use of video games, social media, and information/entertainment consumption), we would be able to see these different effects.
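As a toy illustration of that "washing out" point, here's a small Python simulation. All of the numbers are invented, and the assumed effect (heavy gaming hours lowering grades) is just a stand-in for whichever tribe-specific effect you care about. The pooled correlation between total screen time and grades comes out much weaker than the same correlation computed within the gamer tribe alone.

# Toy illustration of the "washout" problem. All numbers are invented,
# and the assumed effect (gaming hours lowering grades) is hypothetical.
import random
import statistics

random.seed(1)

def corr(xs, ys):
    # Pearson correlation, written out for illustration.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

students = []
for _ in range(900):
    tribe = random.choice(["gamer", "socialite", "seeker"])
    hours = random.uniform(0, 6)            # total daily screen time
    grades = 3.0 + random.gauss(0, 0.3)
    if tribe == "gamer":
        grades -= 0.25 * hours              # the assumed tribe-specific effect
    students.append((tribe, hours, grades))

everyone = [(h, g) for _, h, g in students]
gamers = [(h, g) for t, h, g in students if t == "gamer"]

# The pooled correlation is diluted to roughly a third of the within-tribe one.
print("screen time vs. grades, whole sample:", round(corr(*zip(*everyone)), 2))
print("screen time vs. grades, gamers only: ", round(corr(*zip(*gamers)), 2))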

I'm usually quite skeptical of metaphors used to describe media technologies. Such metaphors tend to highlight the ways in which media technologies are similar to something else while ignoring all the ways in which they are not, and they seem chosen chiefly to support the pre-existing beliefs of the metaphor user. Do you think that a new media technology is harmful? Liken it to cigarettes or crack cocaine. Think that it's benign, or even helpful? Liken it to chess or painting or family.

But the security blanket metaphor seems a bit less...deterministic than these other metaphors. Of course, if one were to take it literally, it does have a negative connotation: the image of teenagers clinging to blankies evokes a kind of pathological arrested development, kind of like a pacifier. But what I like about the metaphor is that as long as you don't take it too literally, it helps you think about what young media users get out of the experience - security, comfort - and, at least for me, it doesn't dictate that this gratification come from a particular type of media experience.