A few months after first considering the promise and threat of A.I. in creative domains, it's still the threats that are getting the most attention. I tend to hear less about the possibility that by allowing A.I. to be used widely (which helps it grow more sophisticated) we are hastening a machine-led apocalypse, and more about what we would lose if we replaced human writers with A.I. Replacement would be an obvious loss for the people who write for a living, but they make the case that it would be a loss for society as a whole: creativity would decline, mediocrity would flourish, and we would lose the ineffable sense of humanity that makes great art. By taking the power to create out of the hands of the many writers and putting it in the hands of the few big tech companies, we would exacerbate inequality and consolidate control over culture.
There are a few steps in this hypothetical process worth scrutinizing. First, this argument assumes that if A.I. is allowed into a creative field (screenwriting, journalism, education), it will necessarily replace human labor. There's a market logic to this: if you owned a company and could automate a process at a fraction of the cost of paying someone to do it, you would have to automate it. If you didn't, your competition would automate it, produce an equivalent good or experience at a lower cost, charge consumers less for it, deliver better value to shareholders as a publicly traded company, and put you out of business. You could point to past examples of such displacement as evidence of this logic (though I have to admit, I found it hard to find examples involving human communication rather than physical labor. I'd assumed chatbots had led to steep declines in customer service employment, but all I could find were editorials predicting steep declines and competing editorials about how customers find chatbots enraging and still demand human customer service agents).
But I still have trouble thinking of this particular replacement-of-human-labor trajectory as inevitable. I can't help but think of A.I. as a tool that humans use rather than a replacement for humans, more like a word processor or the internet than a brain. I can't not see a future (and, honestly, a present) in which writers of all kinds use A.I. for parts of the writing process: formatting, idea generation, wordsmithing. Humans prompt the A.I., evaluate its output, edit it, combine it with something they generated themselves, and share attribution with the A.I. You could call this collaboration or supervision, depending on how optimistic or pessimistic you are, but the resulting work is likely better than what A.I. produces on its own and arrives faster than what humans produce on their own. Humans who prompt, edit, evaluate, and contribute to creating quality work remain as necessary as they were before, and they can still use that necessity as leverage when bargaining with corporate ownership.
I also have trouble seeing a marketplace in which all content is generated by A.I. If the A.I. can only generate mediocre content, won't people recognize its mediocrity and prefer human-made creative work? It's hard not to see this particular facet of the argument against A.I. in creative fields as elitist snobbery: "of course the masses will choose the A.I.-generated dreck they're served! We highly educated members of the creative class must save them by directing them toward 'True Art' (which we just happen to create and have a financial stake in preserving)."
And that points to an ethical argument for A.I. in creative fields that I have yet to hear: that the class of people currently paid to be creative are being protectionist. If they can just keep us thinking about Big Tech and the obscenely wealthy studio execs, we won't have to think about the vast number of smart, creative, compassionate people who happen not to know how to write well, or to write well in a particular language. I worked hard at becoming a good writer, spending a lot of time and money to acquire this marketable skill. Does that make it morally right to deprive others of a writing tool that levels the creative playing field? I assume there are millions of people with the life experience and creativity to be great writers who simply lack the educational background to craft grammatically correct prose. Who am I to insist they take out loans and wait years before they can make worthy artistic contributions?
I do understand the replacement-of-human-labor argument against A.I., and the anti-protectionism argument doesn't really resolve, or even speak to, the market logic behind it. I suppose this is what smart regulation is for: limiting the use of a technology where we see clear evidence of social harm while allowing it where there are opportunities for social good. As an educator, I want to make sure that students understand how to recognize the characteristics of "good" writing (compelling, effective at communicating, standing the test of time), even if they need a little help getting their subjects and verbs to agree.
At this stage in the development cycle, it can be hard to see the good of A.I. in creative realms. The would-be writers and the untold stories are easy to overlook, but any ethical approach to the question of A.I. in creative fields must consider them.