Wednesday, December 25, 2013

What You Search For Is All There Is

I'm still slowly making my way through Thinking, Fast and Slow, and every now and then, ideas from the book pop into my head when considering some type of media use. Today, I was thinking about searching for information online and how the bias of "what you see is all there is" (WYSIATI) might be applied to it (WYSFIATI). Essentially, people have trouble factoring in the effects of relevant information to which they don't have access but that still affects outcomes of interest. People assume, wrongly, that what they see (i.e., the information to which they have access) is all there is (i.e., all the information that is relevant to the outcome of interest).

The types and scope of information to which we had access were largely dictated by physical proximity, our social circles, books, newspapers, TV shows, and movies. These sources gave us an incomplete picture of reality. There's nothing wrong with this per se, as long as the individual knows the extent to which and the way in which their information is incomplete. In Thinking, Fast and Slow, Kahneman cites numerous experiments in which people are asked to make decisions based on information which they are told has a particular likelihood (e.g., 50%) of being true. But the more common flaw in decision making seems not to be the extent to which people trust a source, but the extent to which they assume that the sources are supplying them with complete relevant information.

Media users often employ some kind of skepticism toward the information with which they are presented. For example, viewers often discount what they see on Reality TV: they assume that the behavior of people on such shows is authentic in some sense but not in others.

But what about searching for information? You start off with a question about an upcoming decision. Sometimes, you find a relatively concrete, straightforward answer. Other times, you're searching for something harder to answer: whether or not to have children after 40, say, or whether or not to vaccinate your children, or whether or not to travel to the Middle East, or whether or not that lump on your shoulder is a tumor (in my case, thankfully, it was not).

Again, it's important to say that we never had perfect information to help us make these decisions. As a culture, we probably weren't hip to this at first, but in the recent past, and certainly once the internet came along, skepticism (or rather, cynicism) about mainstream media went, well, mainstream. It is true that our personal social networks, newspapers, or books were incomplete and biased sources of information. I think that the information we acquire through online search is incomplete and biased in different ways, but I wonder if the very act of searching increases our erroneous belief that we're getting complete, or near complete, information. The act of searching, of picking your own source, is likely to make you think that you're acting independently.

But search really magnifies certain biases in ways that mainstream media did not. Search results are, of course, not all that there is. They are based on what others click on (and a "click" is as much an indication of immediate curiosity as it is of the veracity of the information) and based on what we've clicked on in the past. So, confirmation bias probably influences search results. But the act of searching from among many (sometimes hundreds or even thousands, if you're especially diligent) sources makes you feel like a careful information consumer, someone who's not simply being spoon-fed information by those interested in making a profit.
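To make this concrete, here's a minimal, purely illustrative sketch (the scores, topics, and weighting are invented, and no real search engine works exactly this way) of how boosting results that resemble your past clicks can push a weaker source to the top:

```python
# A toy sketch (not any real search engine's algorithm) of how click history
# can tilt rankings: results similar to what a user clicked before get boosted,
# so the user tends to see more of what they already agreed with.

from collections import Counter

def personalized_rank(results, click_history):
    """Re-rank results by overlap with topics the user has clicked before."""
    clicked_topics = Counter(topic for r in click_history for topic in r["topics"])
    def score(result):
        base = result["relevance"]                                 # hypothetical global relevance score
        boost = sum(clicked_topics[t] for t in result["topics"])   # personalization boost
        return base + 0.5 * boost
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Vaccines are safe: meta-analysis", "topics": ["pro-vax"], "relevance": 0.9},
    {"title": "Parents share vaccine-injury stories", "topics": ["anti-vax"], "relevance": 0.6},
]
history = [{"topics": ["anti-vax"]}, {"topics": ["anti-vax"]}]
print([r["title"] for r in personalized_rank(results, history)])
# The lower-relevance result now ranks first, purely because of past clicks.
```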

Perhaps mainstream media was always interested in making a profit. They also were (and continue to be) interested in maintaining the status quo. It's been pointed out many times how this maintenance of the status quo can be a bad thing, how it keeps populations from questioning despotic regimes, antiquated laws, bigotry, etc. But getting free of status-quo-enforcing sources of information has its potential downsides. We may be disappearing down information rabbit holes based on shock value and pre-existing notions of how the world is.

But the point I wanted to make here relates to the certainty we have when we search that what we find is all there is. It seems to me to be counterproductive to refer to this as the illusion of agency or the illusion of choice, as one might be tempted to do. All decisions are made from incomplete information menus that are influenced by others who do not necessarily have our best interests in mind. But you can be more or less aware of the degree to which the information presented in those menus is complete and how it may be biased. The very act of searching makes it a bit harder to see these things clearly.

The object of media literacy is not to eliminate bias in one's media diet or finally obtain complete information, but to increase awareness of bias and incompleteness.

Post script: what I've really been describing is "What You Find Is All There Is" (WYFIATI). "What You Search For Is All There Is" really describes the sense that the world is defined by what you're curious about or interested in, which is a separate but related cognitive bias.

Tuesday, December 10, 2013

Hoaxed

A couple of viral stories, or memes, circulating through the information ecosystem over the past month got me thinking about hoaxes and viral media. They're both stories about interactions between strangers. They have the elements of any good story: conflict, a moral element, a character with whom you can identify. They both incorporate pictures of written documents which have the effect of serving as proof of the events (they make the stories slightly more believable than if there were no pictures at all). One took place on an airplane. The other took place in a restaurant. They both turned out to be hoaxes. Many people, including major news outlets, believed them at first.

My initial thoughts on hoaxes and new media revolved around the premise that hoaxes happen when any information dissemination technology is new. They expose the fact that we have placed too much trust in the new source of information. Think of the War of the Worlds Halloween broadcast (radio), the quiz show scandals of the '50s (TV), Lonelygirl15 (online video), Manti Te'o (online dating). Whether intended or not, hoaxes help audiences or users develop a proper skepticism toward information from the new technology.

But this latest rash of hoaxes seems like something else. This just seems to be par for the course in the era of decentralized, virally spreading information. It's easy to create and spread a hoax online and an audience can only be so skeptical. This kind of hoaxing and unmasking happens a lot on Reddit. People pretend to have cancer, or make up a story around a photo, e.g., "my 5-year-old daughter made this" in order to garner greater attention or sympathy. As good as digital media is at spreading lies, it seems equally equipped to unmask them. Crowd-sourced Sherlocks put pieces of evidence together to determine the truth of any of these stories.

The situation raises a couple of questions:

Can an online information consumer really spend the energy being that skeptical about all the information he/she encounters online, all of the stories like this one? The signal-to-noise ratio isn't quite so low that people will stop believing certain sources, but that could happen. But with a site like Gawker, I'm not convinced the average reader, after having read a fake story on the site, would be any less likely to believe or go to the site for information. But perhaps that is because Gawker readers seek a certain kind of information, which leads me to my next question.

Does it really matter if stories like these turn out to be untrue? Does that erase all of the meaning? It matters if a story about an attack on the White House turns out to be fake. But these morality play stories go viral in part because they spark a conversation about morals and behavior. The conversation and the thoughts shared about the topic still seem valid to me. Perhaps there is news and there is gossip. Both serve a function, and with the latter, we're more tolerant of the occasional falsehood.

----------

After listening to an excellent podcast in which Chuck Klosterman, Bill Simmons, and Chris Connelly discussed what we know about the Kennedy assassination 50 years after it occurred, I had another question related to truth and lies in the digital age: are we, in general, developing a more complete understanding of the world around us, or are we somehow moving further away from that? We have better tools for recording, seeing, hearing, and analyzing. We should be better at answering questions about what happened. But those tools developed at the same time as the hoax-spreading machine known as the internet.

There is some theoretically knowable answer to the question of who killed Kennedy and why. But information and knowledge do not move in tandem. William Gibson once said that the future is already here, it's just not evenly distributed. Maybe knowledge of the world around us is like this. Those with a certain orientation to the technology are moving closer to this knowledge while others move further away.

Post script: There was an interesting article on the Reuters blog about this topic. It features quotes from Jonah Peretti (BuzzFeed) and Nick Denton (Gawker) and is definitely worth reading. Here's a terrific quote from it: "the reasons that people share basically have nothing to do with whether or not the thing being shared is true." The article acknowledges the two types of information circulating - news, and info that leaves people feeling "fleeting instances of comfort or joy," which is more difficult to verify. But even if you're dealing in the latter, if you repeatedly pass on stories that draw some of their appeal from being real (and let's face it: these little gossipy stories make for sub-par fiction) but that turn out to be untrue, you become like the friend who's always telling tall tales and loses all credibility. Not as bad as the news organization that loses credibility, but still, I think people will seek out reliably true "gossip"-type news, shunning sources that get repeatedly duped.

Thursday, November 14, 2013

The Circulation of Trauma

Teaching a class on images of war has been an education for me. It is not my area of expertise, which perhaps made it all the more fascinating. I've had the opportunity to meet many wonderful guest speakers, think about things I don't normally think about, and make new connections when thinking about the subjects that I spend most of my time thinking about.

In our most recent class, we screened the powerful documentary Under Fire, which concerned post-traumatic stress disorder among war journalists. After the screening, we spoke with one of its producers, neuropsychiatrist Dr. Anthony Feinstein, and one of the journalists depicted in the film, Pulitzer Prize winner Paul Watson. Both were as candid and thoughtful as people who encounter trauma could be.

Watson's experiences as a war journalist are the stuff of classic tragedy. His telling of his experience being the photographer who helped change American military policy through his photos of dead American soldiers in Mogadishu in 1993 is the emotional climax of the documentary. It is also the basis of a series of poems, a play, and an opera that will be performed in 2014 in NYC. I've always felt a bit queasy about turning one person's suffering into another person's art/entertainment. Maybe it's the commodification that bugs me, the building of a career on the suffering of others. Even if the message affects others in some positive way, that other exploitative layer is there. Watson's pain also seems incredibly personal and private. In some respects, the troubles with the use of Watson's emotional and mental trauma for the purposes of art mirror the trouble at the heart of his story: the depiction of death and suffering as a kind of desecration, no matter how noble the intentions. One idea I was left to mull over after that evening was how we can discuss trauma without it overwhelming us, or infecting us.

This relates to my musings on the circulation of ideas and stories about personal suffering online. When does our consumption of these ideas, stories, and the conversations they give rise to provide us or the sufferer with solace? When does it prompt us to dwell on the suffering? When does it help us work through the suffering? These are "difficult" topics to think about, to speak about, to hear about. Watson had the feeling that there was something fundamentally wrong about recording and distributing these horrible images. Another part of him knew that they were important for the world to see.

Watson knew that some images he saw would overwhelm an audience. When he encountered piles of dead children, he didn't photograph them. He depicted the horror obliquely, showing the children's schoolbooks scattered on the ground. That ability to take in some overwhelming horror of real life and to retain its essence through storytelling is, I'm understanding, an invaluable part of being a reporter, an artist, and maybe being a compassionate, honest human being.

Sunday, October 13, 2013

Mobile media revealing our selves to ourselves

Mobile phones have many, many purposes: entertainment, art, communication, education. One of the more controversial applications of this technology is using it for monitoring. When certain people have the ability to covertly monitor others while not being monitored themselves, there is a power imbalance. It is possible for those doing the monitoring to judge and punish those they catch doing something bad when, in fact, the monitors do the same things themselves; no one is simply in a position to observe them doing it. I'm not sure this actually happens all that often or will be as likely to happen in the future as most people believe, but for the time being, let's assume this concern about mobile media as a monitoring device is a valid one.

But what about self-monitoring? The possibility of using mobile technology for self-monitoring and self-feedback has not yet been fully realized, and I think it may help people overcome two significant obstacles to behavior change.

The first obstacle is being aware of the patterns in your own behavior: the environmental cues that cause you to unconsciously respond in a way that isn't in your long-term interests, the moods and thoughts that precede worse moods and thoughts. It will be tricky to use mobile tech to reveal these patterns without being too obtrusive. I've tried out some experience sampling technology on my phone and it's hard to even get it to work right, and when it does work right, it may just be too much of a nuisance to put up with. It feels like one more thing you have to do, like a diet, and almost all diets fail. But it's conceivable that you could design a less intrusive way of tracking thoughts, feelings, and behavior throughout the day.
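For what it's worth, the scheduling side of experience sampling isn't the hard part; the obtrusiveness is. Here's a rough sketch of what the prompt scheduler might look like (the number of prompts, the time window, and the question wording are all placeholders, not any particular app's design):

```python
# A minimal sketch of the scheduling logic behind experience sampling (ESM):
# pick a handful of random moments in the waking day, then prompt the user to
# log mood/context at each one. Times and question text are hypothetical.

import random
from datetime import datetime, timedelta

def schedule_prompts(n_prompts=5, wake_hour=9, sleep_hour=22, min_gap_minutes=60):
    """Return n randomly spaced prompt times between wake and sleep."""
    day_start = datetime.now().replace(hour=wake_hour, minute=0, second=0, microsecond=0)
    window = (sleep_hour - wake_hour) * 60  # waking window in minutes
    while True:
        minutes = sorted(random.sample(range(window), n_prompts))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(g >= min_gap_minutes for g in gaps):   # avoid nagging back-to-back
            return [day_start + timedelta(minutes=m) for m in minutes]

for t in schedule_prompts():
    print(t.strftime("%H:%M"), "-> 'What are you doing, and how do you feel (1-5)?'")
```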

The second is one I've been thinking about a lot: people do not like to be told what is good for them by others. They may be a bit more accepting of such advice if it's coming from a trusted expert (say, a medical doctor), but even then, it doesn't feel good to be told what to do. People will start to look for reasons to doubt the expertise of the advice giver. If we give people the tools to make connections for themselves and to use that new knowledge to alter their choice environment, there is no external force telling them what to do. If anything, they are telling themselves, their future selves, what to do.

We all do this already. We resolve to do things in the future that are in our best long-term interests but then fail to do so in the face of temptation or distraction. Mobile media, because it is with us at all times in all contexts, can be a tool with which we cope with temptation and distraction. Ideally, it will not prescribe particular ways of being, but will merely be a tool for individuals to closely observe and then structure their lives.

Modern existence necessitates being part of an increasingly complex (i.e., hard to understand) set of interactions with an increasing number of people. It is difficult to know why you feel the way you feel, or do the things you do. Designing an unobtrusive, secure self-monitoring application and using it in tandem with some choice-limiting technology is a way to exist in such a society. Potentially, mobile tech is a layer of technology between all those aforementioned applications - entertainment, art, communication, education - and the self who is equipped with the older "technologies" of the brain, eyes, ears, and nose.

Saturday, September 21, 2013

Choose your metaphors wisely

The term "digital detox" is now in the Oxford English Dictionary. This suggests a certain cultural awareness of the concept of media overuse which, to someone studying self-control and media use, is heartening. But I wonder about the choice of words.

"Detox"'s meaning, recently, related to food consumption, or the lack thereof. When you traded in the fast food for celery and refrained from consuming anything but tea, you were "detoxing", or "cleansing". But, of course, the term detox originally achieved notoriety when it was used in reference to addictive drugs like heroin. When I think of the term, I still think of it in terms of drugs.

Even beyond that single term, we are apt to think of new experiences in terms of older, more familiar ones. More and more, I hear people talk of Facebook or Candy Crush "addiction" which, again, conjures thoughts of hopeless alcoholics or strung-out meth-heads. Perhaps, in our effort to blame anyone but ourselves for the fact that we're unable to eschew immediately gratifying options for activities that will help us achieve our long-term goals, we want to trump up the power of our indulgences, and we do this by likening our habits to the almost-literally-irresistible urge of the crack addict to smoke more crack.

If we have to understand our new experiences with habitual digital media use in terms of older experiences (and, as much as people love to bad-mouth metaphors as some sort of crutch that keeps us from seeing new things as they truly are, I think this is the only way anything new can be understood at all, at least at first), then food and dieting would probably be a better metaphor than drugs.

Information (that is, the content of all media, digital or otherwise) is like food (and unlike, say, cocaine): we need it to survive. You can't really say no to media any more than you can say no to food. Getting too much of it is bad, but not nearly as bad as injecting large doses of certain drugs. Many more people struggle with tempting foods than with tempting drugs, and I think media over-use and habitual media use should be understood as things that are as common and as benign as bad eating habits, not as rare and harmful as drug addiction.

As time goes on, we'll understand our relationship with digital media on its own terms. But until then, it's important to at least consider how our comparisons to other experiences (both in thought and language) affect our perception of threats and responsibilities.

Wednesday, September 04, 2013

I Always Feel Like Somebody's Watching ME

While listening to a new, poppy-sounding, dance-able Nine Inch Nails song about surveillance (and recalling an older, poppy-sounding, dance-able song about surveillance), I thought about how the topic of privacy and surveillance recurs in the news and in people's everyday conversations. Edward Snowden might just be the beginning of more elaborate, widespread considerations of these topics. Certainly, it's something that many young people have an opinion on, and the songs made me think about the ways in which it is woven into themes of popular television, films, and music.

If the Administration is to be believed, the US government isn't actually looking at everyone's data though they possess the ability to look at anyone's data. There's a logic to this: if your goal is to find potential terrorists (or, hell, if your goal is to find juicy bits of gossip), you wouldn't waste your time looking in detail at everyone's data. You'd try to develop algorithms for identifying people of interest and then dig deeper into their data.
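Something like the following toy sketch captures that two-stage logic: a cheap screening score computed over everyone, and a closer (expensive) look reserved for the few records that cross a threshold. The features, weights, and threshold here are entirely made up for illustration, not any real system's.

```python
# Purely hypothetical illustration of the two-stage logic described above:
# score everyone cheaply, then review only the small set above a threshold.

def screening_score(record):
    weights = {"flagged_contacts": 3.0, "unusual_travel": 2.0, "keyword_hits": 1.0}
    return sum(weights[k] * record.get(k, 0) for k in weights)

def select_for_review(records, threshold=5.0):
    return [r for r in records if screening_score(r) >= threshold]

population = [
    {"id": 1, "flagged_contacts": 0, "unusual_travel": 0, "keyword_hits": 1},
    {"id": 2, "flagged_contacts": 2, "unusual_travel": 1, "keyword_hits": 4},
]
print(select_for_review(population))   # only record 2 gets a closer look
```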

Most acknowledge that authorities (governments, corporations) can "watch" us in this sense. Even the authorities themselves acknowledge that it is technically possible. But who is likely to believe that the authorities are telling the truth when they say that they're not looking at them, in particular?

Certainly, the degree to which one trusts authorities plays some role in this. But I think there's something else at play, something that I've been digging into in my recent research: narcissism. To assume that one is being watched, that one is a "person of interest", is to assume that one is interesting enough to be watched. It would follow that a certain type of person who tended to have narcissistic thoughts would be more likely to think that they were being watched and would express greater concern regarding privacy and surveillance.

Those who exhibit narcissistic tendencies in the age of digital media are in a bind. They want to get more information about themselves and their thoughts out there because they believe it's worth hearing, but they might be more concerned about the nefarious use of this information if they assume this information to be valuable (not just incriminating). They're the most exposed and the most concerned, so the thinking goes.

There's some preliminary evidence suggesting that those higher in narcissism have fewer privacy restrictions on some of their social media, which is, in a way, the opposite of what I would expect. Maybe if an individual high in narcissism were primed to think about corporate data miners or government surveillance with a news story, they would be more likely to change their privacy settings than another person. Maybe not.

If nothing else, the iconography and slogans of the pro-privacy movement are telling: Big Brother isn't watching us. He's watching you.




Monday, September 02, 2013

Do you have a Candy Crush Habit or a Candy Crush Addiction?

Most of my recent research concerns media habits and/or what you might call "unconscious" media use. These are the times when we open up a tab on our web browsers and go to a website without thinking too deeply about why we're doing this or considering the long-term value of such an act. Over time, such behaviors become habits, and habits can be hard to break. It seems that if habits are sufficiently difficult to break, we call them addictions.

But I, like many others in the field of psychology, am not too keen on applying the term "addiction" to habitual media use. Why not? If you're playing Candy Crush Saga five hours a day and you feel unable to stop playing, what is the difference if we call this a bad habit or an addiction?

Well, I suppose it has to do with how our culture currently understands addiction. We treat it as a disease that requires professional intervention. We assume, as is the case with most diseases, that the afflicted is not responsible for their affliction and that it is unlikely that they can get better on their own. They need help. This diagnosis is well-meaning in the sense that when people are going through something bad (and the feeling of being unable to stop doing something is usually bad) it would be worse to heap the extra guilt that comes with responsibility for their current condition (and for altering the condition) on top of their existing troubles. In addition, professionals have years of experience dealing with addictions and decades of research to help develop systems for fighting addiction.

But there's something that's lost: the individual's sense of self-efficacy, the sense that they can do something about the behavior. In some cases, it's possible that self-efficacy can be an important part of altering habitual/addictive behavior, that the individual finding a way to change their behavior is more effective and efficient than sending all of those individuals to professionals and/or through a series of institutions.

As more people find themselves with habits/addictions to games like Candy Crush, it's important to address the following questions: What role does self-efficacy play in breaking habits? If we call the habit an addiction, does this diagnosis reduce the person's sense of self-efficacy thereby making it harder (or perhaps more expensive) to quit?

This isn't to say that simply labeling this kind of behavior a "habit" is without drawbacks. People may not take the threat that their behavior poses as seriously if they call it a habit (most of us have bad habits, after all). Even when we do call the behavior addiction, we're increasingly liberal in our use of that term, which waters it down (much like the term "stalking", which is used in a casual, everyday sense).

So, whether we call this kind of behavior "addiction", I think, is not just a matter of semantics. It is possible that diagnosis affects self-efficacy, which affects likelihood of behavior change. Whether you call it addiction or habit, the end game should be the same: understanding how people stop doing things that they, at first, feel they are unable to stop doing. But it's important to recognize the role of words and diagnoses in that process.

Friday, July 05, 2013

The Two Webs

There are two dialogs on human behavior (which includes political, economic, and social behavior) taking place on the web. In effect, there are two webs.

One web consists of data on human behavior and commentary about this data. This one connects some folks in academia to folks in policy circles and the private sector around the world. This is Big Data.

The advantage of this mode of inquiry is that it harnesses the power of new media technologies to provide more information to help improve our predictive power when trying to understand something as enormously complex as individual and collective human behavior. Many of the critiques of the quantitative study of human behavior were grounded in the fact that studies simply didn't have enough information to predict and explain the variance in behavior. Whereas other sciences (physics, chemistry) had enough information about a system to predict outcomes within that system, social sciences did not. But if we were to assume, for a moment, that a team of researchers had access to every single bit of information about every human thought, feeling, or behavior for thousands of years, then that team's ability to predict human behavior would be comparable, I think, to those in other sciences. With better predictions come better answers to questions: how best to minimize suffering, or the spread of disease, or humans' impact on the environment, or whatever.

Of course, this mode of inquiry is not without its flaws (or at least perceived flaws). The collection and analysis of so much data on human behavior is viewed as being exploitative in some way: those collecting the information benefit and those who are the subjects do not. There are privacy issues: privacy is seen as a prerequisite to mental and emotional health as well as a means of maintaining some power over determining the course of your life (the actual value of privacy would be difficult to determine within this purely quantified conversation about human behavior). There is also the fear that someone with enough information about human behavior will be able to manipulate people to suit their ends (but if those collecting and analyzing data discuss findings freely and don't hoard secrets, this critique doesn't make much sense to me). It can easily be abused by people thinking there's a causal relationship when there is only a correlational one. Statistics, when misused, create the illusion of certainty. Statistics could always be misused, but the more powerful and widespread they become, the more likely misuse becomes and the greater the damage that could be done.

You might call this the rational web. It views behaviors and events as probabilistic (or it should, anyway) and it takes into account the degree to which outcomes are affected. If, say, it was determined that people's political party identification predicted the amount they were paid when controlling for occupation, abilities, etc., you could find an answer to the question of how much difference it made in terms of pay (maybe Republicans make $4,000 more on average when controlling for other relevant variables). In fact, there is an expectation that you answer the "how much" question.
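For illustration, here's the kind of analysis that produces an answer to the "how much" question: a regression that estimates the party gap in pay while holding another variable constant. The data are fabricated so the arithmetic comes out cleanly at $4,000; this is a sketch of the method, not a claim about actual pay.

```python
# A toy illustration of the "how much" question: estimate the pay gap
# associated with party ID while controlling for another variable
# (years of experience). All numbers below are fabricated for the example.

import numpy as np

# Columns: intercept, republican (1/0), years of experience
X = np.array([
    [1, 1, 10], [1, 1, 5], [1, 1, 20], [1, 1, 8],
    [1, 0, 10], [1, 0, 5], [1, 0, 20], [1, 0, 8],
], dtype=float)
# Salaries constructed so the party gap is exactly $4,000 net of experience
y = np.array([64000, 59000, 74000, 62000, 60000, 55000, 70000, 58000], dtype=float)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated party gap, holding experience constant: ${coef[1]:,.0f}")
```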

The other web consists of rhetoric: emotional appeals to pre-existing, deeply held beliefs about human behavior. The most commonly used technique here is to select a few emotionally charged stories and try to get the audience to empathize with them. As access to the web has increased, it has become easier to find a subject (that is, the individual at the center of the story) whose situation exemplifies the pre-existing beliefs about human behavior held by the author of the story and the intended audience. It's easier to find the emotionally charged stories, the one or three or 100 personal stories that, when you look at parts of them the right way, support your pre-existing belief that, say, capitalism or socialism is harmful or that a certain policy does more harm than good. This technique connects some other folks in academia with the public at large, particularly disenfranchised members of the public worldwide.

Here, I think one possible danger of the widespread use of this technique might be that the echo chamber effect (where certain factions become less able to take the perspective of others and become more hostile towards others) gets stronger. Confirmation bias runs amok, and fewer people take into account new information in order to make better decisions. Any holder of an opinion, no matter how wacky, can find others supporting their opinion. This social support, this sense that one is not alone in one's beliefs, is essential to the persistence or propagation of an idea or ideology. Even if one is in the minority, all one has to do is draw an analogy to a group that was in the minority that eventually became the majority (the rebelling colonists in early America or civil rights crusaders in the late 1950s) in order to justify one's beliefs.

You might call this the emotional web. Rather than being probabilistic, it is principled. It rarely asks the question "how racist is a statement?" or "how much privacy is being sacrificed?" or "how much freedom is the right amount of freedom?" In this way, it seems irreconcilable with the rational web.

You can't easily categorize certain websites as one or the other. Two of my favorite news sites - the New York Times and Slate - have some stories that appeal to statistics and analyses of statistics and other stories (usually editorials) that appeal to emotion by cherry-picking individual stories. In fact, a journalistic standard seems to be to combine the two: start with an individual's story and then zoom out to the larger trend. Hook the audience with emotion and convince their inner skeptic with data.

Still, I can see, at the very least, certain blogs that are more emotional or more rational, and it would be interesting to see if certain people gravitated toward either emotional/rhetoric arguments or rational/data-driven ones. Last month, I saw a great paper at the International Communication Association's annual conference by Brian Weeks titled "Partisan enclaves or diverse repertoires? A network approach to the political media environment" that suggested that the self-selection ideological bias (dems watch only MSNBC, repubs watch only Fox News) is a misconception and that personal media repertoires are more diverse, at least ideologically, than many believe. They may be diverse (or rather, balanced) in terms of their emotional or rational content as well. But maybe they are not, in which case we really are two different groups of people having two fundamentally different conversations about human behavior. Definitely an avenue worth exploring.

Tuesday, July 02, 2013

Current-cy

What, exactly, do you lose when you lose the internet? This guy didn't use the Internet for a year. What was he missing, exactly? Why did he want to do this? What was he sick of? Maybe he was sick of the present, the constant present.

Generally, I think it's totally unproductive (and all too common in academia) to start talking about high falutin' nebulous concepts like "constant present" without grounding them in actual experience. In fact, this whole idea of constant present-ness came to me when I was reading through my students' last media use journals of the spring semester, in which they reflected on their media use habits and considered ways in which they could change them. So it arose from an observation of others' experiences as well as a consideration of what the absence of some medium/media would be like.

It occurred to me that most media content is "current" or "present" content. It may not always be "news" in the traditional sense. It may concern what is going on in the lives of our friends or something they are thinking about at that moment.

Take a moment to think about every media experience you had today so far. How much of it would fit into these categories (which, I believe, all refer to or are part of the present)?


  • News
  • Events in people's lives that happened within the past week
  • People's reactions to news events (an extension of news)
  • Reactions to what others are saying about their lives (an extended conversation)
  • TV shows as they air for the first time (like Game of Thrones) and the conversations around them

It appears that a lot of our media use is part of a collective experience of the present or a collective conversation about the present.

What about non-current media experiences: movies that came out years ago, novels or research articles or essays from years ago? Do these make up a smaller portion of our media diets, and if so, are we any the poorer for it? What's different about these media experiences, the ones that are not directly connected to the present?

These experiences seem somehow more solitary, and perhaps more intimate, to me. As a reader/viewer/listener, you feel a sense of one-on-one connection with the filmmaker, writer, or characters, even if you can't wait to go online and blog or tweet about it later.

You're also a bit more outside the sway of current social forces. Of course, all of our thoughts and feelings are influenced by current trends in thought and the collective mood of the culture, but when you're experiencing some media message from the past, your thoughts are less of a reflection of everyone else's thoughts at that time. Online, we all talk about the same things at the same time, even if we have different viewpoints about those things; it's agenda setting on a much grander scale, applying not just to news but to all aspects of our lives via social media. Stepping outside the stream of the present that we experience via most of our media diet means striking out on our own to find our own topics of interest.

Maybe having a greater portion of our lives made up of experiences and conversations of current events has no ill effects in and of itself. Maybe all it does is create a thirst for the past, which we associate with permanence, in contrast with the ephemeral, unpredictable, novel stream of the present. Maybe this thirst for the past and permanence drives us toward religion with its ancient roots.

Dipping into the past by seeing a great movie that has no connection to the present can, if we do it too often, become escapism: an attempt to hide in another world because things aren't going well in the real, present one, a way of giving up, of disconnecting in the worst sense. But dipping into the past, getting out of sync on purpose, has its place.

This is all the more unexpected because we have greater and greater access to media experiences from the past. It is easier than it has ever been before to dip into the past, our own personal past, or entertainment experiences from the past. And yet we do not do this all that often, I suspect, preferring the virtual company of a conversation about the present. The ability to sample the past and to combine ideas from various eras and places, I think, is just as much a creative act as trying to think of something new.

Saturday, June 08, 2013

Why Video?

I like the idea of training all of my students to make short videos that explain a topic. I do this because I believe that being able to create videos to convey your ideas in a concise, compelling, and professional way is just as important a skill as conveying ideas via text. Even though it has been possible for years to create and distribute short videos very cheaply and easily, we seem, within the last year or so, to have reached an inflection point at which many people are making and watching short, compelling videos that explain topics (think of TED talks or the boom in video education precipitated by the rise of Khan Academy). We have the tools, there's a growing audience for this sort of thing, and the marketplace isn't too crowded yet, so there are many opportunities to make videos about topics that haven't been covered on video yet.

Then I found out about the Journal of Visualized Experiments, the first video journal for peer reviewed scientific research. So, we've reached a point where you can easily make and distribute video about an idea. But why would you choose to make a video about something instead of writing a blog about something? Even though making videos is quite easy, it does take much longer to create than a text-based blog entry. So, is it worth the effort? And what about podcasting, i.e., audio explanations?

Research suggests that video has a greater effect on learning and feeling about news information than text, but no greater than audio. So there is some utility in making audio or video for this reason. But might video draw an audience that wouldn't otherwise be drawn to the material? My sense is that many people still feel like watching a short video is more entertaining and less of a burden than reading the same amount of information. By creating a video, you are reaching a different (bigger) audience. Sure, some things lend themselves to visual depictions more than others, but it's interesting to consider the appeal of short videos over text essays for any and every topic, now that it's possible.

Tuesday, May 21, 2013

Commiseration or Perpetuation

I recently read this article by Adam Waytz about the ways in which one's expectations of an experience (going to the dentist, going to graduate school) can shape one's experience. If you expect to have a bad time, you're going to have a bad time.

Waytz cites the heavily-circulated phdcomics as one way in which, in terms of the ways in which graduate school is depicted or perceived, "negativity runs rampant" online.

This helps to make a more general point about venting online, or venting in general. Noting the negative aspects of life can feel cathartic and, assuming these comments are met with empathy, they can help people feel supported and loved and capable. Similarly, the receiver of a message about some negative aspect of life might not feel so alone in their pain or frustration. There are, I would say, positives to being negative, both in terms of how it makes an individual feel and how it makes the people receiving those negative messages feel.

But it would be foolish to assume that merely because negative comments can result in positive outcomes, they actually do result in these outcomes. There are a few answerable questions that are raised: Under what circumstances do negative comments about some experience result in positive outcomes, for the message generator, the message circulator, or the message receiver? Is it just a question of the frequency with which one shares negative posts on social media (the dose that makes the poison) that results in negative, rather than positive, outcomes? At what point does venting become dwelling? Are there certain types of people who require more company in their misery? Are there others who are dragged down by the negativity more easily? Are some people merely imagining the positive outcomes of negative messages (empathy, catharsis)?

This may be a case in which the study of people's use of social media may help us to understand something bigger than social media: the circumstances under which being negative can be a positive.


Saturday, May 11, 2013

News Literacy: What Counts as Evidence?

During a discussion of the news coverage of the Boston Marathon bombing in the last month of classes, I learned that several students believed (or at least remained open to the belief) that Sandy Hook and the Boston bombing were not perpetrated by the suspects most of the mainstream media (and most Americans, I would guess) blame for the attacks. The students were willing to acknowledge that these are "conspiracy theories," and they seemed aware of the negative connotation of the word "conspiracy", and yet they were at least open to believing in these theories. I hadn't confronted belief in conspiracies in the classroom before. When the Virginia Tech shooting happened while I was teaching at Emerson College, none of my students believed that anyone other than Seung-Hui Cho was responsible.

People often disagree about the meaning of an event. In fact, it seems that most events are "politicized" by individuals inside and outside of the media: it seems as though many people need to initially share in mourning or jubilation around an event, and then orient the event within a partisan framework that makes it easier to understand and integrate with their existing knowledge of the world (I say "seems" because I think the case for the increasingly partisan nature of the populace may be overstated). But this is something different. This is disagreement about particular factual elements of the event itself. It seems healthy to have some dissent about the interpretation of events (though not as healthy to use one's group identity to dictate how one feels about a new event), but having a society in which large numbers of people can't agree on the basic facts of an event seems potentially harmful.

Before, I was able to dismiss "truthers" as a tiny segment of the population unlikely to have any long-term, large-scale effect on anything, as a curiosity for the mainstream media, bloggers, audiences, and comedians to gawk at. But the idea of some of my students being open to these views of reality caused me to consider the phenomenon anew. Why do some people believe these theories? What does it say about trust in authority? Is it related to the number of sources for information? What does it say about how people are persuaded to believe some version of events?

As always, there are pre-existing factors; characteristics of the individuals: the extent to which they trust mainstream media, trust in government, general trust of other individuals. Most of these, I would guess, are related to demographic factors. There is evidence of an overall decline in perceived credibility of major news sources. People who do not trust these news sources always confused me: surely, they believe something about current events. If they don't trust these sources, where do they get their information? Are more and more people performing some selective filtering of the information they receive, believing some of the basic facts while rejecting others? There is a lack of nuance in the data from Pew about what, exactly, information consumers who say they don't trust the media are doing when they gather information about current events. At any rate, I imagine that a lack of trust in others, particularly those in authority positions, accounts partially for the willingness to believe in conspiracies. Being raised in a family or community which was made up of members who never occupied positions of authority probably encourages this kind of thinking.

This brings me to the next factor: The social factor. Humans are social creatures, surprisingly dependent on social cues in forming their beliefs about the world and their behaviors within it. It is thus important for an "alternative" belief about an event to be endorsed by many others in order for individuals to adopt this belief. Once upon a time, most information came to us either from people we knew and trusted (locals, family members, friends, co-workers) or from mass media or the government. There was a kind of mass endorsement that was implied when facts about an event were broadcast via mass media. It wasn't merely a matter of blind trust in the few sources of information. I imagine that people also took into account the fact that many others were seeing what they saw and believing it as they did. Thirty years ago, how did someone who did not believe the mass media's account of the basic facts of an event connect with like-minded others? Through hard-to-find underground magazines? Now, with the blogosphere, it is easier to connect with others who believe these alternate versions of events and, what's more, their existence and agreement with these versions are not implied but are real. Of course, people who follow these "alternative versions" blogs know that their view is not a majority opinion, but it is assumed that this group views itself as just ahead of the curve in some important way. There exists some precedent: sometimes, the masses and the mass media have been wrong about the basic facts of events. As long as there are some events in the past that bear some resemblance (no matter how vague) to the current situation, small groups of people can imagine themselves as today's version of those people who knew what was really going on. Again, it is essential that they are a group, and not individuals. Individuals who believe in alternate versions of events are labeled as mentally unstable. Small groups are labeled as conspiracy theorists (one step up on the hierarchy of legitimacy). But if they can reach a certain number of adherents, these groups can gain more legitimacy, perhaps as a political party that contends for positions of power.

But this technique of digging around in the past to cherry-pick a somewhat similar situation in which a beleaguered minority eventually became the majority and likening your own group's plight to theirs represents another interesting piece of the conspiracy puzzle: the rhetorical techniques used to persuade others of their beliefs. My students often point to YouTube videos that show "evidence" that the mainstream account of events must be wrong. There's something important about the way that my students treat video footage and images as more credible than words. Whether it is words, images, sounds, or video, alternative accounts of events lack a few things that mainstream sources of information possess: a stable identity, a long-standing reputation, and expertise. If you were truly interested in determining which of two sources that disagree about the facts of an event is right (if, say, there was something real, immediate, and of great value at stake, like the life of a loved one), you would try to determine how frequently they have been wrong in the past. Here, the mainstream sources of information and the bloggers are playing two totally different games: bloggers, for the most part, possess none of those things. When they tell some version of events that turns out not to be exactly true, they do not suffer for it. They could start a new blog under another name and gain a following by telling a small group of people what they are predisposed to believing. But it usually never comes to that. The revelation of the truth is most often endlessly deferred: the cover-up continues. Therefore, the true trustworthiness of these sources can never be judged.

I think those who shuddered at the prospect of photoshopped images of events circulating among a gullible public had it all wrong. The real threat to a single, accepted view of events is cherry-picking and confirmation bias. Any blogger or YouTuber willing to dig around long enough can find images or accounts of past events that bear some resemblance to current events and favor their proposed course of action or set of beliefs. If the trustworthiness of these sources cannot be judged, then there is no penalty for doing this. They're simply playing a different game, with different rules, than mainstream sources of information.

Despite all of this, I remain skeptical that things are any different than they ever were in regards to beliefs about the basic facts of events. Are conspiracy theorists any more numerous now than in the past, or are they merely more visible to the rest of us? Are they in greater conflict with the mainstream view of events than ever before? Whether we're talking about an increase in partisanship or an increasingly powerful and numerous conspiracy lobby, I remain skeptical that things are any worse than they ever were, or that, despite the sound and fury from the conspiracists and those worried about them, they're much of a threat to the social order. But when smart, well-informed students start giving these ideas credence, it does make me want to know more about it, to keep an eye on it.

Sunday, March 24, 2013

What were they thinking?

When someone (usually a high-ranking public figure, but sometimes a high-school or college student) tweets something embarrassing or incriminating, the question is often asked: "what were they thinking?" The implication of this rhetorical question, it seems, is not to express some genuine curiosity about their decision making process but rather as a way of calling these people incomprehensibly stupid. The older or less familiar people are with social media and its role in young people's lives, the more likely it seems people are to be unable to understand why someone would do such a thing, or to dismiss the people as simply stupid or bad.

But I am genuinely curious about such behavior, and I think there are some untested assumptions many people make about it. Here are three plausible explanations:

1. Ignorance of the public/private distinction within social media. It is possible that social media users simply don't know that their messages, in some cases, can be read by anyone anywhere at any time. While this may seem obvious to many, it can be confusing in that not all social media is uniformly public/private. Facebook messages are between two people but could be re-posted in another forum. Facebook wall posts are visible to anywhere from 20 to 1,000 people who relate to you in different ways, depending on your privacy settings and how many friends you have. Twitter has its own privacy settings: not all tweets are public. Instagram is different. So it is not a straightforward, obvious fact as to whether social media messages are public or private.

2. User believed it was unlikely that anyone would pay attention to their tweets. This seems like a very common, justifiable explanation for why people say embarrassing or incriminating things in social media, and I think it's one that doesn't get much attention. Anyone could read this blog. Does that mean that anyone will read this blog? Of course not. While potential audiences for public tweets are huge, actual audiences are most often very small and familiar. It would be foolish (and conflict with much of what we know about decision making in communication) to assume that people who repeatedly got feedback indicating that their messages were being received by a very small, familiar audience would stay aware of the possibility that others might receive the message. In fact, the odds of these messages reaching those other people are so low that, in a sense, it's not really a very bad decision to tweet something a little embarrassing or incriminating. People take into account likelihoods of outcomes when they make decisions. To imagine that they would, in any instance, counter their experience and sensible unconscious calculations of likelihoods with some rule ("don't ever tweet anything that might be seen by your boss or your girlfriend's mother") doesn't make much sense. (See the back-of-the-envelope sketch after this list.)

3. Momentary lapse in judgment/self-control. I wonder how many people who do this do it not out of ignorance but because they were drunk, had some momentary lapse in judgment, or were really upset. You would think that people would then delete the tweet or message after they came to their senses (most social media allow you the opportunity to revise history in this way), but maybe they forget about it and/or just don't care.
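Here is a back-of-the-envelope version of the unconscious calculation described in explanation 2, with made-up numbers, showing why the tweet can look like a reasonable bet in the moment:

```python
# Rough sketch of the implicit cost-benefit calculation, with invented numbers:
# if the chance of a tweet escaping your usual tiny audience is small, the
# expected cost of a mildly embarrassing tweet feels negligible right now.

p_goes_wide = 0.001        # hypothetical chance the wrong person ever sees it
cost_if_seen = 500         # hypothetical "embarrassment cost" in arbitrary units
benefit_now = 1            # small immediate payoff (venting, a few likes)

expected_cost = p_goes_wide * cost_if_seen
print(f"expected cost: {expected_cost:.2f} vs immediate benefit: {benefit_now}")
# 0.50 < 1, so the tweet "wins" -- until the rare event actually happens.
```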

The likelihood of any of these explanations explaining this behavior depends on people's awareness of how public/private various social media messages are. I think this is an ever-shifting variable, both in terms of how public/private the social media messages actually are (the applications change their privacy settings often) and how aware people are of the current privacy settings (I'd like to think that media literacy classes such as the one I teach at East Carolina University have some influence on this).

I also think that the lack of nuance in our conversations about privacy doesn't help. The knee-jerk reaction to over-sharing seems to be to tell people to imagine that anyone (potential bosses, significant others, exes, parents, grandparents, police, governments) will read/watch what you post online. To me, this is the equivalent of telling kids that "drugs are bad". You put a diverse group of things into one category (because it's simply easier to think of things this way), some of which are extremely dangerous, many of which are mildly dangerous. Young people inevitably experiment, usually with the least dangerous things in the group. They suffer little or no immediate negative effects. They conclude that you're not trustworthy, that all drugs/social media posting is fine, and don't learn the distinction between good/kinda bad/REALLY bad until it's too late.

Tuesday, March 12, 2013

Online classrooms: to synchronize or not to synchronize?

Reading Salman Khan's book on the virtues of online learning got me thinking about one particular attribute of the education experience (which, after all, is just another form of communication, one which, if Khan is a sign of things to come, will be increasingly mediated): whether or not teaching/learning takes place synchronously or asynchronously.

Khan makes an excellent point about a problem with synchronous classroom teaching: everyone has to move at the same pace, and if you're a bit slower to understand the concept being taught, the class moves on without you. If students were allowed to learn at their own pace, slower students would get a chance to master the material without slowing down the faster students, and everyone would learn more. There might be other benefits to teaching and learning asynchronously. There's the obvious convenience factor: our leisure, work, and social lives are increasingly fragmented, unscheduled, and asynchronous. Each of us has a different schedule. This difference essentially requires every activity, including teaching/learning, to become asynchronous. Khan also makes the point that a teacher waiting for you to give them the right answer, even in a one-on-one tutoring session, creates pressure that can inhibit thinking.

Having had the experience of a class move on before I mastered the material (quite recently, as a matter of fact), I instantly saw the value in Khan's customizable approach. But as much as I am capable of seeing the downside to a classroom that moves at one pace, I wonder if the synchronous approach motivates students who would otherwise not be motivated to learn the material. Customizability sounds good, and from the (straw-man) economist's point of view, it is logical to assume that students who know that they must pass the class in order to get a job they like that pays well will try hard to learn the material. The students are accountable for their performance, and they know that they slack off at their own peril. If customizable, asynchronous learning experiences are a better tool for learning (which, I would agree with Khan, they are), then it would follow that students would benefit from their use.

Being in a synchronous learning environment does put pressure on some students: the pressure to keep up. When we take that pressure away, how does this affect student motivation? My intuition, after having spent the past couple of years absorbing research on immediate/delayed gratification and self-control, is that even when some students know they should (or even need to) pay attention and try hard when completing coursework, they will be unable to do so without the pressure to keep up. It's not a question of accountability as much as it is a question of motivation.

I'm reminded of students who form study groups. Some criticize the practice as counter-productive: students spend more time distracting one another than holding one another accountable or helping each other learn the material. But I'd wager that for a certain type of student, such groups (which are a kind of self-imposed synchronous learning environment) are far better than trying to study alone. Traditionally, students have been left to find out for themselves whether studying alone or in groups works best. But as someone who will likely be teaching an online course in the near future, I'd like to find ways of identifying the students who would benefit from a synchronous learning environment, keep things synchronous for them, and let the others learn at their own pace. It would be a hybrid course in terms of synchronicity/asynchronicity.

I have only begun reading Khan's book, so maybe he addresses this issue of motivation in the new asynchronous learning environment later on. The grand MOOC experiment has already begun, and I'd love to get a look at who drops out of the courses and why. Another point on which I agree with Khan: online learning gives us more data on how people learn than we had before, and this could be a great help to those designing better learning environments online and offline.
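If I ever do get my hands on that data, here is the kind of crude first pass I imagine, sketched in Python against a hypothetical log file (the file name and column names are placeholders, not any real MOOC's export format): tally how far each student got before going silent.

```python
# A minimal sketch, assuming a hypothetical CSV of MOOC activity with one
# row per student per module completed (columns: student_id, module_index).
import csv
from collections import defaultdict

def last_module_reached(path):
    """Return {student_id: highest module index completed}."""
    progress = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            m = int(row["module_index"])
            progress[row["student_id"]] = max(progress[row["student_id"]], m)
    return progress

def dropout_by_module(progress, n_modules):
    """Count how many students stopped at each module: a crude picture of
    where in the course people drop out."""
    counts = [0] * (n_modules + 1)
    for last in progress.values():
        counts[min(last, n_modules)] += 1
    return counts

# Example (hypothetical file): dropout_by_module(last_module_reached("mooc_log.csv"), 12)
```

Even something that simple would show whether attrition is concentrated at a particular module or spread evenly across the course.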


Friday, February 22, 2013

Memes: Not really an in-joke anymore

One of my favorite parts about teaching Media Literacy is hearing/reading about what media my students use, what content they enjoy, and how that compares to my own experiences and those of my peers. As someone who is roughly twice their age, I don't really expect that we will share many of the same types of media experiences. Just as my teachers would make stilted references to M.C. Hammer in order to garner a laugh, I made a reference to Kendrick Lamar's Swimming Pools (... ... drank!) and got a hearty chuckle from the kids. We're in different age cohorts, at different stages of the life cycle, and we're living in an increasingly fragmented media environment. What could we possibly have in common?

This makes it all the more surprising when I discover that many of them are encountering the same memes as I am. In some cases, we're on the same website, but in others, we're on different sites (or highly personalized versions of the same site, like Twitter and Facebook) that are increasingly made up of viral jokes that often re-purpose amateur or professional media content in order to comment on current events or a relatable situation (i.e., memes). Do we watch the same TV shows? No. In fact, I'm willing to bet that more students in my classes share the experience of having seen a Sweet Brown meme than share the experience of watching the Super Bowl, the Grammys, or the Oscars. Supposedly, these water-cooler TV events were to remain a common cultural touchstone, and they likely will be the one thing (along with some big movies) that cuts across age groups. But there is something interesting going on with memes. They often originate in relatively tiny communities or obscure sources well outside the mainstream, yet they become the references that my students and I have in common. If I include a reference to Mad Men in my slides, I get blank stares, but a picture of Grumpy Cat gets them laughing every time.

The first thing that occurs to me is that, at least for certain populations (young people?), media users may not need TV and celebrities as a subject of common experience and conversation to the extent that previous generations did. I think the use of memes is partially substituting for the use of TV and celebrities as a way to joke about norms, blow off steam, bond, etc. Based on my casual observations, I'd say that music and musicians as personalities are just as central to these young people as they were to me and to my parents at that age. But TV and celebrities? I'm not so sure.

This isn't to say that TV and celebs are going away, but they may not be as essential to leisure media use as they once were. Perhaps TV has already started to adapt to this, although the rise of memes to the point where they are something my students and I have in common happened so suddenly that I doubt anybody has had time to adjust. Like the TV content that serves/served as our common cultural reference point, these memes ultimately serve only as a vehicle for advertisers and websites to build audiences to sell stuff to. But the professional content producers have been cut out of the equation. Just how much time is spent creating, consuming, and distributing memes? And if more time is spent re-purposing and creating amateur content, regardless of how solipsistic and retrograde its humor may be, isn't this something worth celebrating?

Wednesday, February 20, 2013

What Future Does Professional Media Content Production Have?


In the Media Literacy class I'm teaching this semester, my students are engaging in a role-playing exercise in which they assume the roles of four groups that traditionally play a part in the development of a new medium: governments, advertisers, technology developers, and content producers. First, the students research the role these groups played in the creation and popularization of print, radio, TV, film, the internet, etc. Then they form new groups made up of representatives from each and discuss how to develop a heretofore undeveloped medium. The first example I thought of for that undeveloped medium was virtual reality.

Most of us have an idea of what virtual reality could be. And this imagined VR could involve government regulation (determining how the content will be distributed, whether the roll-out of VR will be subsidized like a utility, how to regulate violent/sexual content, etc.), advertising (product placement in virtual reality? Pretty much an advertiser's dream!), technology developers (Apple VR might have a cleaner look than Microsoft's somewhat cluttered-looking VR), and content producers (custom-made luxury environments for you to relax in). Sounds like a good fit!

I’m going to do this exercise again later in the semester, and I’m having no trouble coming up with several other “media technologies of tomorrow”: augmented reality glasses, superior surveillance technology, a portable instant-fMRI machine, affordable 3D printing. Some of these technologies are already gaining a foothold in the market. It’s easy to see how governments, advertisers, and technology developers would be involved in the creation and development of these technologies. But where would the content producers fit in?

The more I think about the exciting media technologies of the future, the more trouble I have imagining how professional content producers (e.g., screenwriters or the equivalent) will fit into the picture. I'm quite confident that there will always be an appetite for well-told stories. People skilled at telling these stories, through words or pictures or sounds, will have a place in our media environment. But I suspect that people will devote less time to consuming those stories than in years past. During the golden age of radio and television, people spent hours every day consuming content created by professionals. Increasingly, we spend more and more time on Facebook, Twitter, and other activities that don't involve much in the way of professionally produced content (yes, I know, lots of conversations on FB & Twitter are about content produced by professionals, but still, most of the aggregate value of these sites, I would contend, is generated by the users and the creators of the venue, i.e., the technology developers). In thinking about the media technologies of the future, it's hard to find a place for the writers, the producers, the directors. I'm sure there will be a handful of greats who produce content we all talk about, but perhaps a shrinking middle ground, and a shrinking window of attention and time we all spend consuming professionally produced content.

Aside from making my group role-playing project a bit more difficult to design, it's hard to think of a downside to this future. Definitely something I'll come back to in class.

Sunday, February 17, 2013

A media dieting manifesto

A majority of American adult Facebook users have taken a voluntary break from the site, according to a recent Pew Research poll. Though only 8% of those who took a break reported doing so because they felt they were spending too much time on the site, 38% of users aged 18-29 expressed a desire to spend less time on the site next year. So the desire to use less of the media to which we have access is there, and, I think, it is likely to grow. But what are we doing about it, other than taking short breaks? Not much. Not yet.

I know it's a bit daft to speculate about media use in the future, but since this is a blog and not a job talk, here is my prediction: within the next 5 years, more than 50% of internet users in the United States over the age of 22 will use some form of self-restriction from media to which they would normally have access. Either they will use software restricting their use of the internet or their phone, or they will adopt a self-imposed schedule in which they do not allow themselves to use the internet, certain applications or websites, or their phone.

This is the start of an era in which we (adult Americans, perhaps others as well) look at our intake of information and social contact the way we look at our intake of food. Food dieting is a billion-dollar industry, one that is notorious for generating quick fixes that do not work out in the long run. This failure to generate lasting solutions to a public health problem is unsurprising to anyone who has reviewed the literature on habit and self-control. Habits are extremely robust; recidivism is the norm. Everything from cues in our environment to the chemistry of our brains causes habits to persist in the face of repeated attempts by the individual and others to change them. And yet sometimes, behaviors (particularly long-standing habits) change, permanently.

Though there are similarities between our developing view of media use as a kind of guilty pleasure or alluring, potentially addictive activity and our relationship with unhealthy foods, there are important differences. These differences, I believe, make it easier to alter media use behavior than to change our eating habits, provided we use the tools at our disposal.

First, our media use behavior is easier to passively track, far easier than counting calories. Chances are that right now, you could access information about how much and what type of media you are using just by looking at your browser history, on your laptop, tablet, or phone. More sophisticated tracking software is certainly available. That data may just make us feel guilty when we look at it, but if we know what to do with it, it can help us understand more precisely why we fail at media self-regulation and how we can change our use for the long term.
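To make that concrete, here is a minimal sketch of what passive tracking could look like, assuming a copy of Chrome's History SQLite file; the table and column names (urls.url, urls.visit_count) reflect my understanding of that schema and should be treated as an assumption rather than a guarantee.

```python
# A rough sketch of passive media-use tracking from browser history,
# run against a *copy* of the browser's History database (Chrome locks
# the live file while it is running). Schema names are my assumption.
import sqlite3
from collections import Counter
from urllib.parse import urlparse

def visits_per_domain(history_db_path):
    """Tally recorded visit counts per domain from a browser history DB."""
    con = sqlite3.connect(history_db_path)
    tally = Counter()
    for url, visit_count in con.execute("SELECT url, visit_count FROM urls"):
        tally[urlparse(url).netloc] += visit_count
    con.close()
    return tally

# Example: print the ten domains you visit most.
# for domain, n in visits_per_domain("History.copy").most_common(10):
#     print(domain, n)
```

Even a tally this crude tells you more about your media diet than most of us could say about our caloric intake.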

Second, the same technologies that bring so many tempting, immediately gratifying options into close temporal and physical proximity can also deliver us from this problem. They give a sufficiently motivated media user the opportunity to alter the timing and amount of access they have to many different kinds of media. As in the world of food consumption, the individual and the self-regulation industry are in a never-ending battle with those making and promoting tempting options. The more we try to regulate our environment, the more insidious their pitches become. But the fundamental malleability of new media, the bottom-up nature of Code, makes it difficult for the purveyors of temptation to maintain a direct line to our Ids for very long, at least more difficult than it is for advertisers in the still top-down universe of food production, promotion, and consumption in the US.

So far, those of us who have bothered to use or enhance these tools haven't used them in the most effective way. The first stabs at internet self-regulation technologies (SelfControl, Freedom, Leechblock, StayFocusd) all, in some sense, overcompensate for our newly empowered Ids. By totally restricting us from all media for a set period, these programs (like the strategy of "unplugging" that seems to be popular among certain crowds these days) lead to reactance, which ultimately leads to workarounds (finding a computer that doesn't have the restricting program installed, justifying a little cheating here and there, etc.).

The way forward, I believe, is "nudging" (a la Sunstein and Thaler): designing our information environments so that they never deprive us of access to all tempting options, but instead offer menus made up of both tempting options and less-tempting options that benefit us in the long term. Information and social contact would be available in various combinations on a regular schedule, so that we are never so utterly deprived of "fun" things that we break our diets. Each combination would be calibrated to the individual (customizability being another virtue of new media) to maximize not only productivity but happiness, social responsibility, or whatever the individual's long-term goals entail.
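To show what I mean by a menu rather than a wall, here is a toy sketch in Python. The site lists and hours are invented for illustration; a real version would be calibrated to the individual, as I said above, and wired into whatever blocking mechanism you already use.

```python
# A toy sketch of the "menu" idea, not a real blocker: every hour keeps at
# least one tempting option on the menu alongside long-term-goal options,
# rather than blocking everything outright. Sites and hours are made up.
from datetime import datetime

TEMPTING = ["facebook.com", "twitter.com", "reddit.com"]
NOURISHING = ["scholar.google.com", "en.wikipedia.org", "library.example.edu"]

def menu_for_hour(hour=None):
    """Return the list of sites 'on the menu' for a given hour of the day."""
    if hour is None:
        hour = datetime.now().hour
    if 9 <= hour < 17:        # work hours: mostly goal-directed, one treat
        return NOURISHING + TEMPTING[:1]
    elif 17 <= hour < 22:     # evening: a fuller menu, still not everything
        return NOURISHING[:1] + TEMPTING[:2]
    else:                     # late night: wind down
        return ["en.wikipedia.org"]

# print(menu_for_hour(10))  # nourishing options plus one tempting one
```

The point of the design is that there is never a moment of total deprivation, which is precisely what triggers the reactance and workarounds described above.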

I make this seem simpler than it is. But that's typically what manifestos do, right? There is much research to be done.

Tuesday, January 29, 2013

Lyrics: Not as violent/profane/"offensive" as they used to be?

There's an assumption that the number and intensity of references to sex, violence, and otherwise "offensive" content follow a steady upward trajectory across the years: each generation of parents complains that media content is somehow worse than it was when they were young: more anti-social, dumber, or more harmful in some way. If you look at ALL media experiences, this trend seems to hold. Yesterday, parents worried about rock and roll music and comic books; today they worry about video games and Facebook. Mind you, I don't think this cycle of worry has anything to do with the actual effects of these media experiences: most of the people on either side of these moral panics aren't very interested in empirically testing their claims. Nevertheless, content analyses of media experiences would, I think, show a kind of evolution over time, with depictions of taboo activity (typically related to sex or violence) becoming more numerous, intense, and realistic as time goes on.

But if you look WITHIN a certain medium, like popular music, I'm not sure the trend holds. Have the lyrics of songs gotten "worse" - more references to sex, drugs, violence, other taboo subjects - over the past 20 years? Two things make me think they have not obeyed this upward trajectory.

1. Personal experience. I admit, I'm an old fogey. I'm 36, and I mostly listen to music that was written last century (and occasionally before that), but I listen to pop radio on a semi-regular basis and absorb aspects of most cultural trends in the way that any user of meme-based entertainment websites does. I can still detect the shifts in popular genres of music (the ascendance of EDM, for example). Of course, there are small sub-genres I'm completely unaware of, but that was always the case: there have always been musical niches listened to by a minority of listeners, and often those were the places from which future popular music would arise. But I'm willing to bet that even if you sampled music from every genre you could find, even the small ones, the lyrics would be no "worse" than they were in the 1990's. So I guess I'm making two claims: the "worst of the worst" in lyrics today is no worse than it was in the 90's, AND the average popular song is no "worse" in lyrics today than it was at that point. The extent to which the lyrics of popular music were anti-social and taboo-violating peaked in the 90's (but maybe I'm just saying this because that was when I came of age, when I was rebelling, seeking out the "worst" lyrics I could find).

2. Crime rates in the United States. I recall that many people - pundits, scholars, and everybody else - looked at the trend in violent crime in the US during the 20th century and assumed that the upward trend would just continue. But it didn't. Murder, burglary, you-name-it have all fallen since the 90's. There are many reasons for this, I'm sure, but it made me think about lyrics and certain assumptions about lyrics simply because it was another case of people being mistaken about a trend continuing unabated. I'm not saying that these two trends (assuming the lyrical trend is real), which would be more or less simultaneous, are causally related (though if I had to speculate, I would say that the lyrics reflect social reality and not the other way around, or that it's a reciprocal relationship). It just got me thinking that trends are often parts of cycles, not inexorable laws with one trajectory.

Of course this is just a hunch. I'm looking for a quick way to do a little content analysis to test this hunch. There are a lot of useful databases out there now, so I assume it could be done.
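Here is roughly how I'd start, assuming a hypothetical CSV of lyrics with "year" and "lyrics" columns and a hand-built list of "taboo" terms; both the dataset and the term list are placeholders, and a real coding scheme would need to be validated before anyone took the numbers seriously.

```python
# A first pass at the content analysis: taboo-term rate per 1,000 words,
# by release year. The CSV path, columns, and term list are illustrative
# assumptions, not a real dataset or a validated dictionary.
import csv
import re
from collections import defaultdict

TABOO_TERMS = {"kill", "gun", "drunk", "high"}  # stand-in list for illustration

def taboo_rate_by_year(path):
    """Return {year: taboo terms per 1,000 lyric words}."""
    hits = defaultdict(int)
    words = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tokens = re.findall(r"[a-z']+", row["lyrics"].lower())
            words[row["year"]] += len(tokens)
            hits[row["year"]] += sum(t in TABOO_TERMS for t in tokens)
    return {y: 1000 * hits[y] / words[y] for y in words if words[y]}

# Example (hypothetical file): taboo_rate_by_year("lyrics_by_year.csv")
```

If the resulting rate really does peak in the 90's and drift down afterward, that would be at least weak support for the hunch.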

To return to my point about lyrics and their place in the media environment: I get the sense that teen rebellion (or just rebellion in general) is no longer the domain of popular music. Lyrics went about as far as they could go in terms of violating taboos. The internet clearly allows for more rebellion, more anti-social behavior (not only in word, but in deed), which, I suppose, underscores the importance of taking a holistic view of media use, whether you're a parent, a scholar, or both.

Sunday, January 27, 2013

What is Wikipedia now?

I was searching for information on a current event for class discussion: the Sandy Hook Elementary School shootings. Before that, I had assigned a reading to another class: the Wikipedia entry on Bandura's Bobo Doll study (I didn't feel as though students were ready to read an academic article from the early 60's, and I thought that the Wikipedia article did a good job of distilling the essence of the article in a manner that most undergrads could understand and remember). I felt a bit guilty about using Wikipedia in these ways. I wasn't holding up Wikipedia as an object of study (How is it created? Who creates it? How trustworthy is it relative to other sources?), but instead using it as if it were a legitimate source of information.

Upon reflecting on these feelings of guilt, I thought about how Wikipedia has probably changed over the past few years, since gaining prominence. Initially, the question was "is Wikipedia as reliable as published sources?" The assumption was that, because it is open to editing by anyone, it could never be as reliable as published, vetted sources. As far as I know, it is still as open to change as it was back then (though perhaps the system for correcting errors and detecting bias has improved), but I think it's wrong to assume that openness determines the extent to which any document is trustworthy. The factor people should be paying attention to is the motivation to introduce bias.

Think about other sources of information on the web. Is there some motivation to introduce bias into the information? In most cases, there is. Maybe it is to please corporate shareholders, or to retain a certain audience in order to get more views within that niche and generate the profit that keeps the outlet afloat. With Wikipedia, there are particular topics that are edited frequently by interested parties, but as I understand it, the system has ways of detecting and flagging those edits. But for my purposes, for using it in class as a way of getting the facts straight about Sandy Hook or learning about Social Learning Theory, doesn't Wikipedia present a much more even-handed, complete summation than any other available source?

Maybe it wasn't always this way. There might have been a contingent of people intent on shaping public perception of some event or entity who systematically altered the Wikipedia entries on those things. That was when there was still a public conversation about the trustworthiness of Wikipedia entries. But now that that conversation has dropped out of the news cycle and out of the public consciousness, are those interested parties still as numerous, and as interested? These things have implications for the true trustworthiness of the content on Wikipedia. I am skeptical that people intent on fooling others have retained their passion for altering Wikipedia entries, or that their passion has eclipsed the Herculean efforts of the volunteers who work to keep Wikipedia free of disinformation and bias. But who knows?
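One crude way to put a number on "how interested are the interested parties" is to look at recent revision activity. The sketch below uses the public MediaWiki API to count the distinct editors behind an entry's most recent revisions; treat the editor count as my rough proxy for how contested a page still is, not as Wikipedia's own flagging mechanism.

```python
# Pull an entry's recent revision history from the MediaWiki API and count
# distinct editors. (Wikipedia's API etiquette asks for a descriptive
# User-Agent header; omitted here for brevity.)
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def recent_editors(title, limit=100):
    """Return the set of usernames behind the last `limit` revisions."""
    params = urlencode({
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "user|timestamp", "rvlimit": limit,
        "format": "json", "formatversion": 2,
    })
    with urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    revisions = data["query"]["pages"][0].get("revisions", [])
    return {rev["user"] for rev in revisions if "user" in rev}

# Example: len(recent_editors("Bobo doll experiment"))
```

A hotly contested entry should show many distinct editors churning through recent revisions; a settled one, only a handful of maintainers.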

Thursday, January 03, 2013

The Werther Effect 2.0: Psychogenesis in the Internet Age

One of my favorite films from the last few years starts and concludes with an idea that, aptly, took root in my brain and that I have kept thinking about long after I saw the film for the first time. It is articulated by the main character, Dom Cobb: "An idea is like a virus, resilient, highly contagious. The smallest seed of an idea can grow. It can grow to define or destroy you." Soon after seeing this film, I learned about the Werther effect.

The idea of the Werther Effect is simple and unnerving. When people are informed through news reports or fictional stories of the fact that other people have committed suicide, they are more likely to commit suicide themselves. The message (e.g., a news report of suicide or suicide statistics or a story about someone committing suicide) likely makes the message receiver believe that the act in question is something that people, perhaps people similar to him or her, do. That isn't to say that it makes the act socially acceptable or condones the act. By conveying that it has happened and that someone else did it, the message makes the act salient to the receiver, introduces it as a possible course of action, and may imply that it is common. Some theorists say that it may just be a simple suggestion-imitation model of behavior. In either case, it is more likely that the message receiver will commit suicide than if they had not heard the message at all (see Etzersdorfer, Voracek, & Sonneck, 2004; Gould, 2001; Hittner, 2005; Phillips, 1978, 1979, 1982; Wasserman, 1984 for details; Thorson & Oberg, 2003 for a contrary view).

In the top-down world of mass media production, news editors have been encouraged to downplay coverage of suicides, making reports less likely to inspire readers to take their own lives. But if we assume that the same relation between exposure and suicide holds in an age when exposure is "viral" (i.e., spread through online social networks) and/or search-based (i.e., determined by in-the-moment interests), then how do you curb imitative suicide? The story of someone's suicide might spread through a group of people who are already preoccupied with the idea or who seek out this kind of information. It may spread through people who are trying to offer support.

This might be part of a more general problem in the age of the internet (where messages reach people through social networks or based on their consistency with people's pre-existing preoccupations/preferences). There might be other kinds of behaviors (anorexia, cutting oneself, abusing drugs) and even other ways of thinking (depressive thought patterns) that are similar to suicide in that they can be directly influenced by messages (i.e., they are not entirely physiological in origin), they are generally considered undesirable (i.e., painful for the individual and community), and they are stigmatized (that is, not talked about). This particular kind of stigmatization makes these behaviors and thought patterns different from acts of aggression toward others, which, as a culture, we seem to have no problem discussing. The stigmatization leads, in certain communities, to an attempt to overcome the stigma through communication: through sharing experiences with the behavior or thought pattern and through support, to let people know that they are not alone. This reaction to the silence of the stigma is unquestionably well intentioned and it may have positive effects, but the evidence of suicide contagion makes me wonder if there are unanticipated effects. Such messages may make receivers who do not yet exhibit the undesirable behavior or thought pattern (or who exhibit it to a lesser extent than the extreme examples they hear about) more aware of it; believe that it is more common, or that what they are experiencing is more serious or intense, than they would otherwise have thought; and believe that it is something that people like them do or, worse yet, something that people are (as in a permanent, defining characteristic, e.g., a depressed person, an alcoholic).

Possible stimuli that may make a negative behavior or thought pattern more common include advertisements for drugs or programs treating depression, anxiety, and other psychological conditions; online support groups; social media messages (Facebook and Twitter posts) related to the topic; and news stories encountered via major news sites (e.g., NYTimes) or circulated virally via social media (e.g., Twitter). Make no mistake: there is plenty of evidence to suggest that these tools are useful in combating the spread and intensity of suffering, and so they should not be discarded unless there is evidence that they do more harm than good. It's unrealistic to think that they ALL do more harm than good, but the evidence of suicide contagion via mass media suggests that it is worth examining whether some of these messages, under certain circumstances and for certain people, may in fact backfire. Establishing whether any messages designed to curb suffering that is not physiological or physical in origin backfire is the first step. Then it is essential to understand why and when this happens, and to modify the messages so that it doesn't happen anymore: all of which is possible given the wealth of data on Internet use.