Here's a question about addiction: to what degree does it depend on the environment, as opposed to the brain of the individual?
We tend to think of addiction as something that lives in the brain of an individual. Addiction is, in part, inherited, and we can see it when we look at images of addicts' brains. But as I understand it, genes only predispose someone to becoming more easily addicted than another person. And yes, you can see differences between addicted brains and non-addicted brains, but these images don't necessarily tell you to what degree the behavior and its neurochemical correlates are products of genes, of habit (i.e., repeated behavior in the past), or of the environment (specifically, the array of options and stimuli one has in front of oneself). So neither of these bits of information really tells us all that much about the relationship of the addict to the choice environment.
Perhaps addiction is the name for a behavior that has become less and less responsive to the environment. Instead of responding to the negative consequences of choosing to behave in a certain way (hangovers, social disapproval, damaged relationships, loss of professional status, etc.), the addict continues to repeat the behavior. The more addicted people are, the less it matters what goes on around them. All that matters is the repetition of the behavior.
But maybe we overestimate addicts' immunity to the characteristics of their environment. Many approaches to stopping the compulsive behavior associated with addiction attempt to alter the way an individual responds to their environment, whether through therapy, drug treatment, or other means. But in other cases, we try to alter the behavior of addicts by changing the environment itself: making them go cold turkey, or removing certain cues in the environment that trigger the behavior. Many times, these approaches don't work. Addicts find the substance to which they are addicted or engage in the behavior again, and it's hard to remove ALL triggers in an environment.
But when we think about the environmental-manipulation approach to altering addictive behaviors, maybe we're not thinking big enough. What if we had an infinite amount of control over the environment? We could populate it with many other appealing options instead of merely removing the one the addict prefers. Whether or not the addict relapses after being deprived of whatever it is they're addicted to would depend not only on how long they're deprived of it but also on what their other options are. What if we could take an addict who was down and out and plunk them down in a world in which they have many other opportunities for challenging, fulfilling accomplishment, nurturing and nourishing relationships, and spiritual and emotional support? I think that in many cases, the behavior would change, permanently. So addiction really does depend on things outside the individual's brain, but it's hard to see this when our attempts to assess the efficacy of such approaches have been so modest.
Of course, it is difficult if not impossible to just plunk someone down in that perfectly challenging, supportive world. But the amount of control an individual has over the stimuli in their environment has changed. In particular, our media environments can be fine-tuned in many different ways, though at present they just end up being tuned to suit our need for immediate gratification. We could, in theory, fine-tune that environment to gradually wean someone off an addictive stimulus such as a video game, a social networking site, or the novel, relevant information provided by the Internet in general, and replace it with something that satisfies the individual in some way. This would be much harder to do with other addictions, like alcohol. You can't reconfigure the world to eliminate all advertisements for alcohol, all liquor stores, all depictions of the joys of being drunk. It's simply harder to alter those aspects of the environment. Because it has been so difficult even to try these environmental-manipulation approaches to altering behavior, we haven't fully realized their potential.
By fine-tuning media environments (rather than just demanding that media addicts go cold turkey or applying other "blunt instrument" approaches to media addiction), I think we'll find that media addicts are more responsive to their environments than previously thought. I'm not saying that media environments are infinitely manipulable, only that we haven't realized the full potential (or even really scratched the surface) of this approach to halting media addictions.
Tuesday, April 08, 2014
Monday, April 07, 2014
(Mis)Understanding Studies
Nate Silver and his merry band of data journalists recently re-launched fivethirtyeight.com, a fantastic site that tries to communicate original analyses of data relating to science, politics, health, lifestyle, the Oscars, sports, and pretty much everything else. It's unsurprising that articles on the site receive a fair amount of criticism. In reading the comments on the articles, I was heartened to see people debate the proper way to explain the purpose of a t-test (we're a long way from the typical YouTube comments section), but a bit saddened that the tone of the comments made them seem more like carping and less like constructive criticism. Instead of saying someone is "dead wrong", why not make a suggestion as to how their work might be improved?
One article on the site got me thinking about a topic I've already been thinking about as I begin teaching classes on news literacy and information literacy: how news articles about research misrepresent findings and what to do about this phenomenon. The 538 piece is wonderfully specific and constructive about what to do. It provides a checklist that readers can quickly apply to the abstract of a scientific article, and advises readers to take this information into account, along with their initial gut reaction to the claims, when deciding whether to believe the claims, act on them, or share the information. The checklist applies to health news articles in the popular press, but I think it could be applied to articles about media effects as well.
Now, the list might not be exhaustive, and there might be totally valid findings that don't possess any of the criteria on the list, but I think this is a good start. And really, that's what I love about 538. I recognize it has flaws, but it is a much needed step away from groundless speculations based on anecdotes that are geared toward confirming the biases of their niche audience (i.e., lots of news articles and commentary). And they appear to be open to criticism. Through that, I hope, they will refine their pieces to develop something that will really help improve the information literacy of the public.
The piece got me thinking about the systematic nature of the ways in which the popular press misleads the public about scientific findings. They tend to follow a particular script: researchers account for the most likely contributors to an outcome in their studies and test these hypotheses in a more-or-less rigorous fashion. The popular press, because of limited space and the need to attract a large, general audience, does not mention that the researchers accounted for those possible contributing factors. When people read the news article about the research study, they think "well, there's clearly another explanation for the finding!" But in most (not all, but most) cases, researchers have already accounted for whatever variable you imagine is affecting the outcome.
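To make "accounting for" a variable concrete, here is a minimal, hypothetical sketch (my own illustration, not anything from the 538 piece or from any study it covers) using simulated data and ordinary least squares in Python: a confounder drives both the predictor and the outcome, so the predictor's apparent effect shrinks toward zero once the confounder is included in the model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)                    # e.g., some third factor like age or income
predictor = 0.7 * confounder + rng.normal(size=n)  # e.g., hours of media use
outcome = 0.5 * confounder + rng.normal(size=n)    # the predictor has no direct effect here

# Naive model: the predictor alone appears to matter.
naive = sm.OLS(outcome, sm.add_constant(predictor)).fit()

# Adjusted model: once the confounder is "accounted for", the apparent effect shrinks.
adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([predictor, confounder]))).fit()

print("naive coefficient on predictor:   ", round(naive.params[1], 3))
print("adjusted coefficient on predictor:", round(adjusted.params[1], 3))
```

A news story that reports only the naive-style association leaves readers free to imagine the confounder themselves, even when the original paper already included it.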
In other cases, the popular press simply overstates either the certainty that we should have about a finding or the magnitude of the effect of one thing on another thing. Again, if we look at a few things from the original research article (like the abstract and the discussion section), we should be able to know whether or not the popular press article was being misleading, and we wouldn't even have to know any stats to do this.
The popular press benefits from articles and headlines that catch our eyes and confirm our biases. That's just the nature of the beast. Instead of just throwing out the abundant information around us, it's worth developing a system for quickly vetting it, and taking what we can from it.