Sunday, September 10, 2023

Do people care who (or what) wrote this?

Generative A.I. as a writing tool has limitations. But what I've discovered over the past week is that my perceptions of those limitations can change drastically when I learn about a new way to use it. Before, I'd been giving ChatGPT fairly vague prompts: "Describe the town of Cottondale, Alabama as a news article." Listening to a copy.ai prompt engineer on Freakonomics helped me understand that being more specific in your prompts about the length of the output ("500-1000 words") and the audience ("highly-educated audience") makes all the difference.

The other key lesson is to think of writing with A.I. as an iterative collaboration: ask the program to generate five options, use your good ol' fashioned human judgment to select the best one, then ask it to further refine or develop that option. If you find the result boring, ask it to vary the sentence structure, or have it generate five new metaphors for something and pick the best one. I had sensed that writing with generative A.I. could be more like collaborating with a co-author than using an enhanced version of auto-correct; this helped me see what, exactly, that collaboration looks like and how to do it effectively.
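To make that workflow concrete, here's a minimal sketch of the generate-select-refine loop as it might look through the OpenAI API. I was working in the ChatGPT interface, so the model name, prompts, and selection step below are illustrative assumptions rather than a record of what I actually ran:

```python
# A sketch of the "iterative collaboration" workflow, assuming the
# OpenAI Python client (pip install openai). The model name and
# prompts are placeholders, not what I actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Describe the town of Cottondale, Alabama as a news article. "
    "Length: 500-1000 words. Audience: highly-educated readers."
)

# Step 1: ask for five candidate drafts in a single call (n=5).
drafts = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat model works here
    messages=[{"role": "user", "content": PROMPT}],
    n=5,
)

# Step 2: the human picks the best candidate. The judgment step
# stays human -- here it's just a choice read from stdin.
for i, choice in enumerate(drafts.choices):
    print(f"--- Draft {i} ---\n{choice.message.content}\n")
best = drafts.choices[int(input("Pick a draft (0-4): "))].message.content

# Step 3: feed the selection back and ask for a targeted revision.
revision = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": PROMPT},
        {"role": "assistant", "content": best},
        {"role": "user", "content": "Revise this draft: vary the sentence structure and sharpen the opening paragraph."},
    ],
)
print(revision.choices[0].message.content)
```

The point of the structure is that the model does the generating and the human does the selecting; each pass narrows the options before asking for more.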

As the output got better and better, I wondered, "Has anyone done a blind test of readers' ability to discern A.I.-assisted writing from purely human writing?" I'd heard of a few misleading journalistic stunts where writers trick readers into thinking they're reading human writing when, in fact, they are not. But I'm looking for something more rigorous, something that compares readers' ability to discern that difference across genres of writing: short news articles, poetry, short stories, long-form journalism, short documentary scripts, etc. It seems likely that readers will prefer the A.I.-assisted version in some cases, but it's important to know which types of cases those will be.

I also wondered how we - as readers and writers - will react to all of this. I can think of three metaphors for possible reactions to A.I.-assisted writing:

1) the word processor. Its use changed how writers write. It changed the output. Like most disruptive technologies, it was met with skepticism and hostility. But eventually, it was widely adopted. Young writers who hadn't grown up writing longhand had an easier time adapting to this new way of writing. The technology became "domesticated" - normal to the point of being invisible, embedded in pre-existing structures of economy and society.

2) machine-generated art. Machines have been generating visual art for decades. Some of that art is indistinguishable from human-generated visual art. Some of it embodies the kinds of aesthetic qualities that people value. And yet machine-generated art has never risen beyond a small niche. The market for visual art largely rejects it, in part because those who enjoy art care about how it is made. Something about the person who created a work, and the process by which it was created, is part of what they value about art.

3) performance-enhancing drugs. In this scenario, the output of A.I.-assisted writing is superior - in some cases far superior - to unaided human writing, and there is market demand for it: the public sets aside its qualms and embraces good writing regardless of how it came about. But writers, some industries, and some governments perceive this as unfair or even dangerous, whether because of what bad actors could do with such a tool or because of how profoundly disruptive its widespread use would be for economies and societies. So they regulate it, discourage its use through public shaming, or, in some cases, forbid it outright.

The quality of the output is only part of what will determine A.I.-assisted writing's eventual place in our lives. The general public's reaction to it is another part worth paying attention to.
