Reading Jill Lepore's review of Michael Patrick Lynch's new book, The Internet of Us, reminded me to write something on the topic of truth. I haven't read Lynch's book yet, but even the sub-title ("Knowing more and understanding less in the age of big data") gave me that all-too-familiar twinge of jealousy, of feeling as though someone had written about an idea that had been gestating in my mind for years before I had the chance to write about it myself, of being scooped. So, during this brief lacuna between the time at which I learned of this book's existence and the time at which I actually read it, let me tell you what I think it should be about, given that title. That is: how is it possible that the Internet helps us to know more and to understand less? Or, to take Lepore's tack, what is the relationship between the Internet/Big Data and the truth/reality?
At this stage, I have only a semi-organized collection of ideas on the topic. I'll base each idea around a question.
To what extent has truth (or reality) become subjective in the age of the Internet/Big Data?
I think we vastly overestimate the extent to which the Internet has fragmented our sense of truth and/or reality. And by "we," I mean most people who think about the Internet, not just scholars or experts. My sense is that it is a commonly held belief that the Internet allows people access to many versions of the truth, and also that groups of people subscribe to the versions that fit their worldviews. This assumption is at the core of the "filter bubble" argument and undergirds the assertion that the Internet is driving the fragmentation and polarization of societies.
I contend that most people agree on the truth or reality of most things, but that we tend not to notice the things we agree on and instead focus on the things on which we do not agree. Imagine that we designed a quiz about 100 randomly selected facets of reality, without cherry-picking controversial topics. The questions could be as pedestrian as: "What color is the sky?"; "If I drop an object, will it fall to the ground, fly into the sky, or hover in the air?"; "2 + 2 = ?" I'd imagine that people would provide very similar answers to almost all of these questions, regardless of how much time they spend on the Internet. Even when we do not explicitly state that we agree on something, we act as though we believe a certain thing that other people believe as well. We all behave as if we agree on the solidity of the ground on which we walk, the color of the lines on the roadways and what they mean, and thousands of other aspects of reality in everyday life.
The idea that reality or truth is becoming entirely subjective, fragmented, or polarized is likely the result of our becoming highly focused on the aspects on which we do not agree. That focus, in turn, is likely the result of our learning about the things on which we do not agree (that is, of being exposed to people who perceive a handful of aspects of reality very differently than we do) and of the truth/reality relating to this handful of aspects genuinely becoming more fragmented. Certainly, it is alarming to think about what society would look like if we literally could not agree on anything, either explicitly or implicitly; so, there is understandable alarm about the trend toward subjectivity, regardless of how small and overestimated the trend may be.
So, I'm not saying that truth/reality isn't becoming more fragmented; I'm only saying that part of it is becoming that way, and that we tend to ignore the parts that are not.
It's also worth considering the way in which the Internet has unified people in terms of what they believe truth/reality to be. Reality and truth were never unified to begin with: societies around the globe disagree about aspects of world history, about how things work, and so on. Some of those people gained access to the Internet and began to believe in a reality that many others around the globe already believe in: that certain things happened in the past, that certain things work in certain ways. The Internet has likely fragmented some aspects of reality and truth for some people, but it has also likely unified other aspects for others.
Maybe I'm just being pedantic or nit-picky, but I think any conversation about the effects of the Internet on our ability to perceive a shared truth/reality should start with an explicit acknowledgment that when people say that society's notion of truth/reality is fragmented, they actually mean that a small (but important) corner of our notion of truth/reality is fragmented. Aside from considering the net effects of the Internet on reality (has it fragmented more than it unified?), we might also consider this question:
What types of things do we agree on?
Are there any defining characteristics of the aspects of truth/reality on which we don't agree? When I try to think of these things (things like abortion, gun rights, affirmative action, racism, economic philosophy, immigration policy, climate change, evolution, the existence of god), the word "controversial" comes to mind, but identifying this category of things on which we don't agree as "controversial" is tautological: they're controversial because we don't agree on them; the controversy exists because we can't agree.
So how about this rule of thumb: we tend to agree on simple facts more than we agree on complex ones. When I think of the heated political discourse in the United States at this time, I think about passionate disagreements about economic policy (what policy will result in the greatest benefit for all?), immigration (ditto), gun rights (do the benefits of allowing more people to carry guns [e.g., preventing tyrannical government subjugation, preventing other people with guns from killing more people] outweigh the drawbacks [e.g., increased likelihood of accidents, increased suicide rates]?), and abortion (at what point in the gestational process does human life begin?). These are not simple issues, though many talk about them as if the answers to the questions associated with each issue were self-evident.
I can think of a few reasons why truth/reality around these issues is fragmenting. One is, essentially, the filter bubble problem: the Internet gives us greater access to other people, arguments, facts, and data that can all be used by the motivated individual as evidence that they are on the right side of the truth. In my research methods class, I talk about how the Internet has supplied us with vast amounts of data and anecdotes, and that both are commonly misused to support erroneous claims. One of these days, I'll get around to putting that class lecture online, but the basic gist of it is that unless you approach evidence with skepticism, with the willingness to reach a conclusion that contradicts the one you set out to find, you're doing it wrong. Dan Brooks has a terrific blog post about how Twitter increases our access to "straw men." So, not only does the Internet provide us with access to seemingly objective evidence that we are right; it also provides an infinite supply of straw men with which to argue.
In the aforementioned cases in which we disagree about complex issues, we tend not to disagree about whether or not something actually happened, whether an anecdote is actually true, or whether data is or is not fabricated. Most disagreements stem from the omission of relevant true information or the inclusion of irrelevant true information. We don't really attack arguments for these sins; we tend not to even notice them, and instead talk past each other, grasping at more and more anecdotes and data (of which there will be an endless supply) that support our views.
If it is the complex issues on which we cannot agree, then perhaps the trend toward disagreement is a function of the increasing complexity and interdependency of modern societies. Take the economy. Many voters will vote for a candidate based on whether or not they believe that the policies implemented by that candidate will produce a robust economy. But when you stop and think about how complex the current global economy is, it is baffling how anyone could be certain that his or her policies would result in particular outcomes. Similarly, it is difficult to know what the long-term outcomes of bank regulations might be, or of military interventionism (or the lack thereof). Outcomes related to each issue involve the thoughts, feelings, and behaviors of billions of people, and while the situations we currently face and those we will face in the future resemble situations we've faced in the past (or situations that economists, psychologists, or other "ists" could simulate), they also differ in many ways that are difficult to predict (that's simply the nature of outcomes that involve billions of people over long periods of time). And yet we act with such certainty when we debate such topics! Why is that? This leads to my last question:
Why can't we arrive at a shared truth about these few-but-important topics?
First, there is the problem of falsifiability. Claims relating to these topics typically involve an outcome that can be deferred endlessly. For example, one might believe that capitalism will result in an inevitable worker revolution. If the revolution hasn't occurred yet, that is not evidence that it will never occur; it is only evidence that it hasn't occurred yet. There's also the problem of isolating variables. Perhaps you believe that something will come to pass at a certain time, it doesn't, and you ascribe that failure to a particular cause; but unless you've made some effort to isolate the variable, you can't rule out the possibility that the cause you identified actually had nothing to do with the outcome.
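To make the isolating-variables problem concrete, here is a toy simulation in Python (my own illustration, not anything from Lynch or Lepore, with all the numbers invented for the example). An outcome is driven entirely by a hidden factor, but an observer who only tracks a pet cause, one that merely tends to co-occur with that hidden factor, will still see a striking pattern and conclude the pet cause is responsible.

import random

# A purely hypothetical illustration of the isolating-variables problem.
# The outcome is driven only by a hidden factor, but the "pet cause" tends
# to co-occur with that factor, so it looks responsible if you never
# control for anything else.

random.seed(1)

trials = 10_000
hits_when_cause_present = 0
cause_present_count = 0
hits_when_cause_absent = 0
cause_absent_count = 0

for _ in range(trials):
    hidden_factor = random.random() < 0.5                     # the real driver
    pet_cause = random.random() < (0.8 if hidden_factor else 0.2)  # correlated, not causal
    outcome = hidden_factor                                    # outcome ignores pet_cause

    if pet_cause:
        cause_present_count += 1
        hits_when_cause_present += outcome
    else:
        cause_absent_count += 1
        hits_when_cause_absent += outcome

print("P(outcome | pet cause present):", hits_when_cause_present / cause_present_count)
print("P(outcome | pet cause absent): ", hits_when_cause_absent / cause_absent_count)

The two conditional rates come out very different, so the pet cause looks like it matters, even though, by construction, it has nothing to do with the outcome. That is exactly the trap of explaining a single real-world result without isolating variables.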
There are falsifiable ways of pursuing answers to questions relating to these topics. And despite all the hand-wringing about the fragmentation of truth/reality on these topics, there are also plenty of folks interested in the honest pursuit of these answers; answers that, despite the growing complexity of the object of study (i.e., human behavior on a mass scale), are getting a bit easier to find with the growing number of observations to which we have access via the Internet.
The other problem is the lack of incentive to arrive at the truth. Oftentimes, we get an immediate payoff for supporting a claim that isn't true, in the form of positive affect (e.g., righteous anger, in contrast to the feeling of existential doubt that often comes with admitting you're wrong) and staying on good terms with those around you (admitting you're wrong is often inseparable from admitting that your friends, or family, or the vast majority of your race or gender or nationality are wrong). So, there are powerful incentives (affective and social) to arrive at certain conclusions regardless of whether or not those conclusions are in line with truth/reality. In contrast, the incentives to be right about such things seem diffuse. We would benefit as a society and a species if we were all right about everything, right?
I suppose some would argue that total agreement would be bad, that some diversity of opinions would be better. But we don't tolerate diversity of opinion on whether or not the law of gravity exists, or whether 2 + 2 = 4. Why would we tolerate it in the context of economic policy? Is it just because of how complex economies are, and that to think you have the right answer is folly? (I suppose that's a whole other blog entry right there, isn't it?) But certainly, even if you believe that, you'd agree that some ideas about economies are closer to or further from the truth and reality of economies. So, perhaps what I'm saying is that if we lived in a society where "less right" ideas were jettisoned in favor of "more right" ideas, we would all benefit greatly; but those benefits would only come if a large number of us acted on a shared notion of the truth, and the benefit would be spread out among many (hence, "diffuse").
But what if there were an immediate incentive to be right about these complex issues, something to counter the immediate affective and social payoffs of being stubborn and "truth agnostic"? I love the idea of prediction markets, which essentially attach a monetary incentive to predictions about, well, anything. You could make a claim about economic policy, immigration policy, terrorism policy, etc., and if you were wrong, you would lose money.
Imagine you're a sports fan who loves a particular team. You have a strong emotional and social incentive to bet on your team. But if your team keeps losing and you keep betting on your favorite team, you're going to keep losing money. If you had to participate in a betting market, you'd learn pretty quickly how to arrive at more accurate predictions. You would learn how to divide your "passionate fan" self from your betting self. And if you compare the aggregate predictions of passionate fans to the aggregate predictions of bettors, I'd imagine that the latter would be far more accurate. I would assume it would work more or less the same way with other kinds of predictions. People would still feel strongly about issues and still be surrounded by people who gave them a strong incentive to believe incomplete truths or distorted realities. But they would have an incentive to cultivate alternate selves who made claims more in tune with a shared reality.
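Here is a minimal sketch of that intuition in Python; it is my own toy example, not a description of any real betting market. It assumes a team that actually wins 40% of its games, a passionate fan who always forecasts a win at 90% confidence, and a bettor whose forecast matches the team's record. Each forecast is penalized with the Brier score (squared error), a standard "proper" scoring rule that stands in for losing money when you're wrong.

import random

# A minimal, hypothetical sketch of why betting on outcomes rewards accuracy.
# We simulate 1,000 games for a team with an assumed 40% win rate and score
# two forecasters with the Brier score (lower penalty is better).

random.seed(0)

TRUE_WIN_PROB = 0.40   # assumed win rate, for illustration only
FAN_FORECAST = 0.90    # emotional/social incentive to back the team
BETTOR_FORECAST = 0.40 # forecast tuned to the team's actual record

def brier(forecast, outcome):
    """Squared-error penalty for a single probabilistic forecast."""
    return (forecast - outcome) ** 2

games = [1 if random.random() < TRUE_WIN_PROB else 0 for _ in range(1000)]

fan_penalty = sum(brier(FAN_FORECAST, g) for g in games) / len(games)
bettor_penalty = sum(brier(BETTOR_FORECAST, g) for g in games) / len(games)

print(f"Fan's average penalty:    {fan_penalty:.3f}")
print(f"Bettor's average penalty: {bettor_penalty:.3f}")

Over many games the calibrated bettor pays a much smaller average penalty than the loyal fan, which is the whole point: once inaccuracy carries a cost, the cheapest strategy is to report what the evidence actually supports rather than what you wish were true.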
Of course, not all issues lend themselves to being turned into bets (how would one bet on whether or not life begins after the first trimester?), but it still seems like, at least, a step in the right direction, and gives me hope for how we can understand the truth and our relationship to it in the Internet age, perhaps even better than we did before.