Friday, August 25, 2023

An ethical case for using A.I. in creative domains

A few months after first considering the promise and threat of A.I. in creative domains, it's still the threats that are getting the most attention. I tend to hear less about the possibility that by allowing A.I. to be used widely (which helps it grow more sophisticated), we are hastening a machine-led apocalypse and more about what we would lose if we replaced human writers with A.I. It would be an obvious loss for the people who write for a living, but those writers also make the case that it would be a loss for society. Creativity would decline, mediocrity would flourish, and we would lose the ineffable sense of humanity that makes great art. By taking the power to create out of the hands of the many writers and putting it in the hands of the few big tech companies, we would exacerbate inequality and consolidate control over culture.

There are a few steps in this hypothetical process worth scrutinizing. First, this argument assumes that if A.I. is allowed to be used in a creative field (screenwriting, journalism, education), it will necessarily lead to the replacement of human labor. There's a market logic to this: if you owned a company and you could automate a process at a fraction of the cost of paying someone to do it, you would have to automate it. If you didn't, your competition would automate it, be able to produce an equivalent good or experience at a lower cost, charge consumers less for it, deliver better value to shareholders as a publicly traded company, and put you out of business. You could point to examples of such things happening in the past as evidence of this logic (though I have to admit, I found it hard to find examples involving human communication rather than physical labor. I'd assumed chatbots had led to steep declines in customer service labor, but all I could find were editorials about how they will lead to steep declines and competing editorials about how customers find chatbots enraging and still demand human customer service agents).

But I still have trouble thinking of this particular replacement-of-human-labor trajectory as inevitable. I can't help but think of A.I. as a tool that humans use rather than a replacement for humans, more like a word processor or the internet than a brain. I can't help but see a future (and, honestly, a present) in which writers of all kinds use A.I. for parts of the writing process: formatting, idea generation, wordsmithing. Humans prompt the A.I., evaluate its output, edit it, combine it with something they generated, and share attribution with the A.I. You could call this collaboration or you could call it supervision, depending on how optimistic or pessimistic you are, but the work this process generates is likely better than what A.I. produces on its own and arrives faster than what humans produce on their own. But humans who prompt, edit, evaluate, and contribute to creating quality work are as necessary as they were before. They can still use that necessity to make their case when bargaining with corporate ownership.

I also have trouble seeing a marketplace in which all content is generated by A.I. If the A.I. can only generate mediocre content, won't people recognize its mediocrity and prefer human-made creative work? It's hard not to see this particular facet of the argument against A.I. in creative fields as elitist snobbery - "of course the masses will choose the A.I.-generated dreck they're served! We highly educated members of the creative class must save them by directing them toward 'True Art' (which we just happen to have created and have a financial stake in preserving)."

And that points to an ethical argument for A.I. in creative fields that I have yet to hear: the argument that the class of people who are currently paid for being creative is protectionist. If they can just keep us thinking about Big Tech and the obscenely wealthy studio execs, we won't have to think about the vast number of smart, creative, compassionate people who happen not to know how to write well, or to write well in a particular language. I worked hard at becoming a good writer, spending a lot of time and money to acquire this marketable skill. Does that make it morally right to deprive others of the ability to use a writing tool that levels the creative playing field? I assume there are millions of people with the life experience and creativity to be great writers who simply lack the educational background to craft grammatically correct prose. Who am I to insist they take out loans and wait years before they can make worthy artistic contributions?

I do understand the replacement-of-human-labor argument against A.I. Nothing in the anti-protectionism argument really resolves or even speaks to the market logic argument. I suppose this is what smart regulation does - limit the use of technology in cases where we see clear evidence of social harm but allow it where there are opportunities for social good. As an educator, I want to make sure that students understand how to recognize the characteristics of "good" (i.e., compelling, effective at communicating, standing the test of time) writing, even if they need a little help getting their subjects and verbs to agree.

It can be hard to see the good of A.I. in creative realms at this stage in the development cycle. It is hard to see the would-be writers and the untold stories, but any ethical approach to the question of A.I. in creative fields must consider them. 

Sunday, August 20, 2023

Types of audience fragmentation

I'm embarking on a new large-scale project relating to audience fragmentation. Or rather, I have been embarking on it for the past year - such is the leisurely pace of the post-tenure research agenda. It started as a refutation of the echo chamber as an intuitive but overly simplistic characterization of audiences' media diets in the age of information abundance. Then I realized that someone had already written that book.

In researching the idea, I was surprised to find how few studies about fragmenting audiences and echo chambers even tried to capture what I felt was the right kind of data: data capturing the whole of people's media diets - not aggregate audience data, not what individual users post on a particular platform, not even the amount of time individuals spend or what they see on a particular platform, but ALL of what they see across all platforms and media. Unless you capture that, you really have no way of knowing whether individuals have any overlap with one another in the content they consume and/or how many of them are sequestering themselves in ideologically polarized echo chambers.

In defense of researchers, this is a hard kind of data to get. What media content people consume is often a private matter. It's just hard - for an academic researcher, a company, a government - to get people to trust them enough to hand over that data. Observing people might cause them to change their behavior. Still, some researchers have made inroads - working with representative samples, trying to get granular data on precisely what content people are seeing - and I think that if we start to piece together what they have gathered and supplement it with new data, we'll be able to get a better sense of what audience fragmentation actually looks like.

I've started the process of collecting that data. In a survey, I've asked a sample of college students to post URLs of the last 10 TikTok videos they watched, the last 10 YouTube videos they watched, and the last 10 streaming or TV shows they watched. I'm anticipating that there will be more overlap in the TV data than in the YouTube or TikTok data. But I wonder what counts as meaningful when it comes to overlap or fragmentation. I return to the age-old question: so what?
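Just to make that question concrete for myself, here is a rough sketch (in Python) of one way the responses could be summarized once they're in - the share of distinct items that show up on more than one person's list. The data structure and the example responses are entirely hypothetical stand-ins for whatever the survey actually returns.

```python
# A minimal sketch, assuming each participant's answers end up as a list of
# URLs or show titles per platform. All responses below are made up.
from collections import Counter

def shared_item_rate(responses: dict[str, list[str]]) -> float:
    """Fraction of distinct items that appear in more than one participant's list."""
    counts = Counter(item for items in responses.values() for item in set(items))
    return sum(c > 1 for c in counts.values()) / len(counts)

tiktok = {
    "p1": ["tiktok.com/v/101", "tiktok.com/v/102"],
    "p2": ["tiktok.com/v/201", "tiktok.com/v/202"],
    "p3": ["tiktok.com/v/102", "tiktok.com/v/301"],
}
tv = {
    "p1": ["The Bear", "Succession"],
    "p2": ["The Bear", "Love Island"],
    "p3": ["Succession", "The Bear"],
}

print(f"TikTok: {shared_item_rate(tiktok):.2f} of items are shared")  # 1 of 5 -> 0.20
print(f"TV:     {shared_item_rate(tv):.2f} of items are shared")      # 2 of 3 -> 0.67
```

A single number like this obviously flattens a lot, which is exactly the "so what" problem.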

Let's say you have a group of 100 people. In one scenario, 50 of them watch NFL highlight videos, 25 watch far-right propaganda videos, and 25 watch far-left propaganda videos. In another scenario, all 100 of them watch 100 different videos about knitting. The latter audience, as a whole, is more fragmented than the former audience. The former is more polarized in terms of the content it consumes - half of the sample can be said to occupy echo chambers, either on the right or left. 
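One rough way to operationalize that distinction - and these metrics are illustrative assumptions on my part, not established measures - is to score each scenario separately for fragmentation (how spread out the audience is across texts) and polarization (how much of the audience is consuming partisan content):

```python
# Illustrative only: fragmentation as normalized entropy over texts,
# polarization as the share of the audience consuming partisan content.
from collections import Counter
from math import log

def fragmentation(views: list[str]) -> float:
    """Shannon entropy of the audience's distribution over texts, normalized by
    the maximum possible entropy (everyone watching something different)."""
    counts = Counter(views)
    n = len(views)
    if n <= 1:
        return 0.0
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    return entropy / log(n)

def polarization(views: list[str], partisan: set[str]) -> float:
    """Share of the audience consuming content labeled partisan."""
    return sum(v in partisan for v in views) / len(views)

scenario_1 = ["nfl_highlights"] * 50 + ["far_right_video"] * 25 + ["far_left_video"] * 25
scenario_2 = [f"knitting_video_{i}" for i in range(100)]
partisan = {"far_right_video", "far_left_video"}

for name, views in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2)]:
    print(name, f"fragmentation={fragmentation(views):.2f}",
          f"polarization={polarization(views, partisan):.2f}")
# Scenario 1: fragmentation ~0.23, polarization 0.50
# Scenario 2: fragmentation 1.00, polarization 0.00
```

The point of separating the two numbers is just that the most fragmented audience here is also the least polarized one.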

It's clear to me why the polarization of media diets matters - it likely drives political violence, instability, etc. But why does fragmentation, in and of itself, matter?

I guess one fear is that we will no longer have any common experiences, and that will make it harder to feel like we all live in the same society - not as bad as being ideologically polarized, but it's plausible to think that it might lead to a lack of empathy or understanding. But what counts as a common experience? Do we have to have consumed the same media text? Stuart Hall would tell you, in case you didn't already know, that different people watching the same TV episode can process it in different ways, leading to different outcomes. But at least there would be some common ground or experience. 

But what if we watched the same genre of television show, or watched the same type of video (e.g., videos about knitting)? If we contrast the 100 people who all watched different knitting videos with 100 people who all watched videos about 100 very different topics (e.g., knitting, fistfights, European history, coding, basketball highlights, lifestyle porn, etc.), I would think that the former group would have more to talk about - more common ground and experience - than the latter, despite the fact that there is an equal amount of overlap (which is to say, no overlap) in terms of each discrete video they watched.

Instead of just looking at fragmentation across discrete texts, it would also be useful to look at it across genres or types. It could get tricky determining what qualifies as a meaningful genre or type of TikTok video. Some TikTok videos share a set of aesthetic conventions but may not convey the same set of values, or vice versa. There will be some similarities across the texts in people's media diets, even if there is no overlap in the discrete texts. The challenge now is to decide which similarities are meaningful.
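As a toy illustration of what measuring at the genre level rather than the text level might look like - using a made-up representation in which each video carries a genre label - the same overlap measure gives very different answers for the two groups of 100 described above:

```python
# A sketch under assumed labels: each video is a (genre, id) pair, so overlap
# can be computed on full texts or on genres alone.
from itertools import combinations

def mean_jaccard(diets: list[set]) -> float:
    """Average Jaccard similarity across all pairs of media diets."""
    pairs = list(combinations(diets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Group A: 100 people, each watching a different knitting video.
group_a = [{("knitting", i)} for i in range(100)]
# Group B: 100 people, each watching a video on a different topic.
group_b = [{(f"topic_{i}", i)} for i in range(100)]

for name, group in [("Group A", group_a), ("Group B", group_b)]:
    text_level = mean_jaccard(group)
    genre_level = mean_jaccard([{genre for genre, _ in diet} for diet in group])
    print(f"{name}: text-level={text_level:.2f}, genre-level={genre_level:.2f}")
# Group A: text-level 0.00, genre-level 1.00 (no shared videos, full common ground by genre)
# Group B: text-level 0.00, genre-level 0.00 (no shared videos, no shared genres)
```

The numbers only mean something, of course, if the genre labels themselves capture similarities that matter - which is the part that still has to be worked out.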