Tuesday, December 19, 2023

All the Media Content We Cannot See

Like the majority of the electromagnetic spectrum, most of any given high-choice media landscape - be it YouTube, TikTok, or even Netflix - is difficult to see without some kind of aid, and thus easy to forget about. One might argue that the most important stuff - the content that has the most influence on individuals and society, i.e., the popular stuff - is easily visible through Top Ten or "trending" lists on the platform itself or through articles, podcasts, and conversations of cultural critics. But how much of the entire spectrum of content - or, if you take a human-centered approach to the question, viewing hours - are we observing when we talk about the tall head of this distribution?

The answer has implications for how we conceive of the culture we live in. Often, we assume we can get a pretty good sense of a culture by observing what media content it chooses to spend its time with. The topics, values, and aesthetics of popular content have long been thought to reflect and/or shape the preoccupations of the culture. This was all easy enough during the era of mass media, when choice was limited, although even then it oversimplified the character of a culture. We look back on the late 1960s in America and think of psychedelia and unrest, but plenty of folks living in that place and time were likely oblivious to such trends. Still, it seems safe to say that you could get at least some idea of what most people living in a certain place and time were thinking and feeling by examining its popular media content. 

It's a commonplace that the number of choices for media content has exploded in the past decade or two. Truly understanding how content relates to culture - or trying to derive a sense of culture by examining content - has become trickier. In the era of broadcast TV, it wouldn't take much time for anyone to watch episodes of the 10 most popular TV shows. Out of the total number of viewing hours in a given culture, that might get you, say, 50% of them. The other 50% of the viewing hours would be distributed across less popular programming, so you could make a decent claim to "knowing" a culture by examining 10 popular TV shows. What would a similar approach get you now, if applied to Netflix?

According to recently released data from Netflix, viewers watched a total of roughly 90 billion hours in the first half of 2023. Of those hours, the top ten shows accounted for 4.9 billion - or roughly 5% of the total. Watching episodes of these ten shows, then, wouldn't be a very good way to get an idea of what Netflix viewers, generally, were watching (or, by extension, what they thought or how they felt about anything). It may be that the shows are in some way representative of the larger whole - in terms of their genre, topic, tone, aesthetic, values, etc. - but given the relatively small proportion of the whole they represent, there is reason to suspect that we are missing a lot about this group of people and their preoccupations if we only take into account the most popular content. 

But this is where many of us start, and by "us" I mean scholars and researchers as well as cultural critics, content creators seeking to create content that resonates with an audience, and marketers. What other option do we have? 

One alternative would be to take stratified samples from further down the distribution tail, an approach used in this article from The Hollywood Reporter. It's important to note that such an approach requires that platforms make their data available in ways that make this feasible, and in this respect, Netflix has done us a huge favor. It is more difficult to get underneath the trending surface of TikTok or YouTube to try to get even a rough idea of what the rest of it looks like. 

And with YouTube and TikTok, the problem of unaccounted-for content is likely much worse. 

Let's do some back-of-the-envelope* calculations to try to see how little of the content universe we're seeing when we examine, say, the top ten TikTok videos from last year. There are roughly 1.1 billion active monthly TikTok users. The average user spends 95 minutes on the app per day. So, that's a total of roughly 104.5 billion minutes per day, or about 38.1 trillion minutes per year. The most viewed TikTok video of 2023 had 504 million views and it is roughly 30 seconds long. Obviously, the next nine had fewer views than this, but I'm finding it difficult to obtain raw view numbers for each video (it's easy to find the number of followers, but plenty of people watch TikTok videos created by users they don't follow). So, let's err on the side of overestimating and say that each video is 1 minute long and is watched 500 million times. By watching the top ten TikTok videos, we are accounting for 5 billion minutes of viewing. What proportion of the total are we seeing?

Before we do the math, it's worth remembering our tendency to fail to see meaningful differences among very small proportions. We can pretty easily tell the difference between 20% of something and 5% of it, but we fail to differentiate between .1% and .01%, even though the latter pair differs by a factor of ten while the former pair differs only by a factor of four. Often, we just think of anything below 1% of something as "very small," whether it's .5% or .05%. But if we're really trying to know something - a culture, a media diet, etc. - it's important to correct for that bias and recognize just how small the proportion really is. 

Watching the top 10 TikTok videos of 2023 would account for roughly .01% (about one hundredth of one percent) of all TikTok viewing. Even the top 100 videos - each with fewer views than the top video, and most under 1 minute in duration - would make for a feasible, if time-consuming, watch, yet would still account for well under .2% of content viewed on TikTok. 
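The back-of-the-envelope arithmetic above can be sketched in a few lines. The usage figures are the rough, hedged estimates from the text, not precise measurements:

```python
# Rough estimates from the text, not precise measurements
users = 1.1e9         # monthly active TikTok users
minutes_per_day = 95  # average time on the app per user per day

total_minutes_per_year = users * minutes_per_day * 365
# -> about 3.81e13, i.e., roughly 38.1 trillion minutes

# Generous upper bound for the top ten: 10 videos x 1 minute x 500M views
top10_minutes = 10 * 1 * 500e6

share = top10_minutes / total_minutes_per_year
print(f"{share:.4%}")  # ~0.013% of all TikTok viewing
```

Even with every estimate rounded in the top ten's favor, the share stays around one hundredth of one percent.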

Even if we are studying a particular topic or domain within these high-choice environments - say, political messages or health-related messages - sampling only the most popular videos doesn't get us anywhere near the complete or representative sample that it once did in the low-choice days of mass media. Most viewing is happening outside of the sample, further down the distribution tail. Until we reckon with the vast size of these media environments and the diversity of users' media diets, it's hard to know what we're missing.


*If anyone has more accurate usage data, I would love to see it! I don't have supreme faith in these data, but it's the best I could find right now. 

Wednesday, September 27, 2023

So you want to be an influencer

There's something about the name of the major in the department in which I teach - "Creative Media" - that, for many first-year students, brings to mind the career of an influencer. So as to disabuse them of the notion that our major will teach them how to be an influencer, I outline the differences between the career of a media professional - a broad category encompassing screenwriters, directors, producers, editors, camera-people, newscasters, sound engineers, etc. - and the career of an influencer. In searching for a metaphor or parallel to describe the career of an influencer, I typically refer to pop star musicians (though the following is likely applicable to any genre of popular music - rap, country, etc.). 

On the up side, the barrier to entry is low - anyone can start playing music, post that music online, promote it on social media, develop a following, become famous and earn plenty of money. This is in contrast to many media professional positions that require access to expensive equipment, social and/or geographic proximity to connections in the business, competitive apprenticeships, and a track record of proven success. On the down side, there is a lot more competition when the barrier is low. There's always someone younger, hotter, funnier, edgier, and more novel than you, and they're so eager and hungry for attention that they'll be happy to take that sponsorship deal you turn up your nose at. There's no incentive for platforms like YouTube, TikTok or Spotify to share much revenue with creators because there's a never-ending talent pipeline, and so they tend to pay creators very little.

Generally, pop star careers are shorter than those of many media professionals, again because of the low barrier to entry, their replaceability, and the audience's desire for novelty. Of the small percent of influencers who achieve success, it's hard to find ones who maintain it for more than a few years. This is in contrast to all of the aforementioned media professional careers that typically last decades, with salaries and job security typically increasing over time. 

There's also the challenge of maintaining the pace of output that being an influencer demands. Whereas audiences are trained to expect a new song from a musician maybe once a year, 13 new episodes of a TV show every year, and a new film from a well-known director every several years, influencers are expected to generate new content at least once a month. Maintaining that pace for years can be taxing. Looking at the Wikipedia entries of several popular influencers from the 2010s, the word "hiatus" frequently appears - an understandable response to the non-stop production schedule. This is to say nothing of the effects of public scrutiny on one's mental health, the blurring of personal and professional identities, the loss of privacy - none of which are issues for the average editor, screenwriter, or audio engineer. 

Other influencers try to make the jump to the mainstream, collaborating with established media professionals, making movies or TV shows, parlaying their success on the web into something more lasting. Some succeed while most do not. Gradually, I think influencers and the entertainment industry will get better at intuiting which personalities will transfer to the big screen and which are better suited to TikTok, YouTube, podcasts, etc. 

Another antecedent to the influencer is the career of the reality TV star, though reality stars seem to rely more heavily on personal appearance or sponsorship gigs than influencers, who seem to more effectively monetize their content and exert more control over their image from the start. Maybe the similarity is less related to their career trajectory and more to their relationship to audiences - more intimate and ordinary than the average actor or director.

This all sounds like I'm trying to dissuade students from pursuing the life of an influencer, which I'm not. The fact is that tens (or maybe hundreds?) of thousands of influencers (broadly defined) make enough money to live on. I'd guess that this is more than the number of people who make a living at being a pop star, but maybe less than those who make a living as a musician. 

Being an influencer, like being a pop star, seems to require "natural talent." There's only so much you can be taught about how to succeed in those realms, and a college classroom certainly isn't the place to learn it. Better to just watch some tutorial videos, go out there, and do it. And if you got it, you got it, and if you don't, you don't. I can't think of a reason not to pursue both paths - the path of the influencer and the path of the media professional - simultaneously, though the time demands of either path will eventually force you to decide. 

As the influencer phenomenon continues to age, we'll get better at answering these questions about that career: Do influencers get enough revenue coming in from videos that were popular years before to make a living? Do sponsorship deals persist or do they dry up? What does the second (or third) act of the career of an influencer look like?

Sunday, September 10, 2023

Do people care who (or what) wrote this?

Generative A.I. as a writing tool has limitations. But what I've discovered over the past week is that my perceptions of those limitations can drastically change when I learn about a new way to use it. Before, I'd been giving ChatGPT fairly vague prompts: "Describe the town of Cottondale, Alabama as a news article." Listening to a copy.ai prompt engineer on Freakonomics helped me understand that being more specific in your prompts about the length of the output ("500-1000 words") and the audience ("highly-educated audience") makes all the difference. 

The other key lesson is to think of writing with A.I. as an iterative collaboration: ask the program to generate five options, use your good ol' fashioned human judgment to select the best one, then ask it to further refine or develop that option. If you find it to be boring, ask it to vary the sentence structure or generate five new metaphors for something and then pick the best one. I sensed that writing with generative A.I. could be more like a collaboration with a co-author than an enhanced version of auto-correct; this helped me to see what, exactly, that collaboration looks like, and how to effectively collaborate with the program. 

As the output got better and better, I wondered, "has anyone done a blind test of readers' ability to discern A.I.-assisted writing from purely human writing?" I'd heard of a few misleading journalistic stunts where writers trick readers into thinking that they're reading human writing when, in fact, they are not. But I'm looking for something more rigorous, something that compares readers' abilities to discern that difference across genres of writing: short news articles, poetry, short stories, long-form journalism, short documentary scripts, etc. It seems likely that readers will prefer the A.I.-assisted version in some cases, but it's important to know what types of cases those will be. 

I also wondered what our reactions - as readers and writers - to all of this might be. I can think of three metaphors for possible reactions to A.I.-assisted writing:

1) the word processor. Its use changed how writers write. It changed the output. Like most disruptive technologies, it was met with skepticism and hostility. But eventually, it was widely adopted. Young writers who hadn't grown up writing free-hand had an easier time adapting to this new way of writing. The technology became "domesticated" - normal to the point of being invisible, embedded in pre-existing structures of economy and society. 

2) machine-generated art. Machines have been generating visual art for decades. Some of that art is indiscernible from human-generated visual art. Some of it embodies the kinds of aesthetic characteristics that people value. And yet machine-generated art has never risen beyond a small niche. The market for visual art largely rejects it, in part because those who enjoy art care about how it is created. Something about the person who created it and the process by which it was created is part of what they value about art. 

3) performance enhancing drugs. Output from A.I.-assisted writing is superior - in some cases far superior - to unaided human writing, and there is market demand for it - the public sets aside its qualms and embraces good writing regardless of how it came about. This situation is perceived by writers, some industries, and some governments as unfair or possibly dangerous, maybe in terms of what bad actors could do with such a tool or how profoundly disruptive its widespread use would be for economies and society. Therefore, they regulate it, discourage its use through public shaming, or, in some cases, explicitly forbid its use. 

The quality of A.I.-assisted writing's output is only part of what will determine its eventual place in our lives. The general public's reaction to it is another part worth paying attention to. 

Friday, August 25, 2023

An ethical case for using A.I. in creative domains

A few months after first considering the promise and threat of A.I. in creative domains, it's still the threats that are getting the most attention. I tend to hear less about the possibility that by allowing A.I. to be used widely (which helps it grow more sophisticated) we are hastening a machine-led apocalypse and more about what we would lose if we replaced human writers with A.I. It would be an obvious loss for the people who write for a living, but they make the case that it would be a loss for society. Creativity would decline, mediocrity would flourish, and we would lose the ineffable sense of humanity that makes great art. By taking the power to create out of the hands of the many writers and putting it in the hands of the few big tech companies, we would exacerbate inequality and consolidate control over culture. 

There are a few steps in this hypothetical process worth scrutinizing. First, this argument assumes that if A.I. is allowed to be used in a creative field (screenwriting, journalism, education), it will necessarily lead to the replacement of human labor. There's a market logic to this: if you owned a company and you could automate a process at a fraction of the cost of paying someone to do it, you would have to automate it. If you didn't, your competition would automate it, be able to produce an equivalent good or experience at a lower cost, charge consumers less for it, be a better value to shareholders as a publicly traded company, and put you out of business. You could point to examples of such things happening in the past as evidence of this logic (though I have to admit, I found it hard to find examples that involved human communication rather than physical labor. I'd assumed chatbots had led to steep declines in customer service labor, but all I could find was editorials about how it will lead to steep declines and competing editorials about how customers find chatbots enraging and still demand human customer service agents). 

But I still have trouble thinking of this particular replacement-of-human-labor trajectory as inevitable. I can't help but think of A.I. as a tool that humans use rather than a replacement for humans, more like a word processor or the internet than a brain. I can't not see a future (and, honestly, a present) in which writers of all kinds use A.I. for parts of the writing process: formatting, idea generation, wordsmithing. Humans prompt the A.I., evaluate its output, edit it, combine it with something they generated, and share an attribution with the A.I. You could call this collaboration or you could call this supervision, depending on how optimistic or pessimistic you are, but the work that it generates is likely better than what A.I. generates on its own and it is generated faster than what humans generate on their own. But humans who prompt, edit, evaluate, and contribute to creating quality work are as necessary as they were before. They can still use that necessity to make their case when bargaining with corporate ownership. 

I also have trouble seeing a marketplace in which all content is generated by A.I. If the A.I. can only generate mediocre content, won't people recognize its mediocrity and prefer human-made creative work? It's hard not to see this particular facet of the argument against A.I. in creative fields as elitist snobbery - "of course the masses will choose the A.I.-generated dreck they're served! We highly-educated members of the creative class must save them by directing them toward 'True Art' (which we just happened to create and have a financial stake in preserving)."

And that is an ethical argument for A.I. in creative fields that I have yet to hear: the argument that the class of people who are currently paid for being creative are protectionist. If they can just keep us thinking about Big Tech and the obscenely wealthy studio execs, we won't have to think about the vast number of smart, creative, compassionate people who happen to not know how to write well, or to write well in a particular language. I worked hard at becoming a good writer, spending a lot of time and money to acquire this marketable skill. Does that make it morally right to deprive others of the ability to use a writing tool that levels the creative playing field? I assume there are millions of people with the life experience and creativity to be great writers who simply lack the educational experience to craft grammatically correct prose. Who am I to insist they take out loans and wait years before they can make worthy artistic contributions?

I do understand the replacement-of-human-labor argument against A.I. None of the anti-protectionism argument really resolves or even speaks to the market logic argument. I suppose this is what smart regulation does - limit the use of technology in cases where we see clear evidence of social harm but allow it where there are opportunities for social good. As an educator, I want to make sure that students understand how to recognize the characteristics of "good" (i.e., compelling, effective at communicating, standing the test of time) writing, even if they need a little help getting their subjects and verbs to agree.

It can be hard to see the good of A.I. in creative realms at this stage in the development cycle. It is hard to see the would-be writers and the untold stories, but any ethical approach to the question of A.I. in creative fields must consider them. 

Sunday, August 20, 2023

Types of audience fragmentation

I'm embarking on a new large-scale project relating to audience fragmentation. Or rather, I have been embarking on it for the past year - such is the leisurely pace of the post-tenure research agenda. It started as a refutation of the echo chamber as an intuitive but overly simplistic characterization of audiences' media diets in the age of information abundance. Then I realized that someone already wrote that book.

In researching the idea, I was surprised to find how few studies about fragmenting audiences and echo chambers even tried to capture what I felt was the right kind of data: data capturing the whole of people's media diets - not aggregate audience data, not what individual users post on a particular platform, not even the amount of time or what individuals see on a particular platform, but ALL of what they see across all platforms and media. Unless you capture that, you really have no way of knowing whether individuals have any overlap with one another in what content they consume and/or how many of them are sequestering themselves in ideologically polarized echo chambers. 

In defense of researchers, this is a hard kind of data to get. What media content people consume is often a private matter. It's just hard - for an academic researcher, a company, a government - to get people to trust them enough to get that data. Observing people might cause them to change their behavior. Still, some researchers have made inroads - working with representative samples, trying to get granular data on precisely what content people are seeing - and I think that if we start to piece together what they have gathered and supplement it with new data, we'll be able to get a better sense of what audience fragmentation actually looks like. 

I've started the process of collecting that data. In a survey, I've asked a sample of college students to post URLs of the last 10 TikTok videos they watched, the last 10 YouTube videos they watched, and the last 10 streaming or TV shows they watched. I'm anticipating that there will be more overlap in the TV data than in the YouTube or TikTok data. But I wonder what counts as meaningful when it comes to overlap or fragmentation. I return to the age-old question: so what?

Let's say you have a group of 100 people. In one scenario, 50 of them watch NFL highlight videos, 25 watch far-right propaganda videos, and 25 watch far-left propaganda videos. In another scenario, all 100 of them watch 100 different videos about knitting. The latter audience, as a whole, is more fragmented than the former audience. The former is more polarized in terms of the content it consumes - half of the sample can be said to occupy echo chambers, either on the right or left. 
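One simple (and admittedly crude) way to quantify the difference between these two scenarios is a concentration measure like the Herfindahl-Hirschman index over the audience share of each piece of content. The scenario labels below are hypothetical stand-ins for the examples above:

```python
from collections import Counter

def hhi(audience_choices):
    """Herfindahl-Hirschman index over content audience shares.
    Near 1 = everyone watching the same thing; near 0 = highly fragmented."""
    counts = Counter(audience_choices)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# Scenario 1: 50 watch NFL highlights, 25 far-right, 25 far-left videos
scenario_1 = ["nfl"] * 50 + ["far_right"] * 25 + ["far_left"] * 25
# Scenario 2: 100 people each watch a different knitting video
scenario_2 = [f"knitting_{i}" for i in range(100)]

print(hhi(scenario_1))  # 0.375 -> far less fragmented
print(hhi(scenario_2))  # ~0.01 -> maximally fragmented for this group
```

Note that a concentration index like this captures fragmentation but says nothing about polarization - the two scenarios differ on both dimensions, and only one of them is visible to this measure.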

It's clear to me why the polarization of media diets matters - it likely drives political violence, instability, etc. But why does fragmentation, in and of itself, matter? 

I guess one fear is that we will no longer have any common experiences, and that will make it harder to feel like we all live in the same society - not as bad as being ideologically polarized, but it's plausible to think that it might lead to a lack of empathy or understanding. But what counts as a common experience? Do we have to have consumed the same media text? Stuart Hall would tell you, in case you didn't already know, that different people watching the same TV episode can process it in different ways, leading to different outcomes. But at least there would be some common ground or experience. 

But what if we watched the same genre of television show, or watched the same type of video (e.g., videos about knitting)? If we contrast the 100 people who all watched different knitting videos to 100 people who all watched videos about 100 very different topics (e.g., knitting, fistfights, European history, coding, basketball highlights, lifestyle porn, etc.), I would think that the former group would have more to talk about - more common ground and experience - than the latter, despite the fact that there is an equal amount of overlap (which is to say, no overlap) in terms of each discrete video they watched. 

Instead of just looking at fragmentation across discrete texts, it would also be useful to look at it across genres or types. It could get tricky determining what qualifies as a meaningful genre or type of TikTok video. Some TikTok videos share a set of aesthetic conventions but may not convey the same set of values, or vice versa. There will be some similarities across the texts in people's media diets, even if there is no overlap in the discrete texts. The challenge now is to decide which similarities are meaningful.

Wednesday, July 12, 2023

Micro-blogging, Take 2

As a social media platform, you know you've achieved success when others start cloning you. It's easy to call to mind the successful copycat platforms that, in several cases, far exceeded their predecessors:  Facebook (MySpace, Friendster), Reddit (Digg). It's a bit harder to recall the many clones that never make it (Voat, Google+, Orkut), typically because the network effects that are intrinsic to platforms' success put those with small userbases at a distinct disadvantage or because they lack the infrastructure and/or revenue to support a rapidly growing userbase. In other words, there typically aren't enough people to make the place interesting or valuable, or there are too many people to keep running/moderating the platform for free. 

But Meta/Facebook/Instagram's introduction of Threads is different in this regard, giving us a chance to see what a clone could do if it didn't have to worry about those two problems. Threads has already successfully ported 100 million users from Instagram, maintaining the network structure among interest/affinity groups and connections between established influencers and their audiences. It also has Meta's massive infrastructure at its disposal - growth won't be a problem. And so we have a rare opportunity to see if this version of a micro-blogging platform - already operating at a scale similar to the existing leader, Twitter - will be all that different than what came before. 

Mark Zuckerberg has pitched Threads as a friendlier version of Twitter. Broad generalizations about the emotional valence of any social space inevitably oversimplify - you can find pockets of friendliness and hostility among virtually any large group of people, online or offline. Still, it's entirely possible that one space could have the tendency to be friendlier than another - that's an empirically testable claim (provided you can agree on how to measure "friendliness"). 

Before trying to determine whether Threads has or is likely to achieve this goal (or whether such a goal is desirable, or if friendliness and ideological heterogeneity are mutually exclusive), it's worth considering how it might go about achieving it. Most obviously, more content moderation might tamp down overt hostility. Less obviously, there are facets of the platform that affect linkages among users - which users' posts are visible to other users. 

By importing lists of followers and popular accounts from Instagram, Threads imported a set of cultural norms, one that evolved over the last decade and privileged attractive or attention-getting still images over words, audio, or video. Broadly speaking, there's a kind of showiness to Instagram, a content ranking system that rewards positivity (some would argue to toxic levels). Then there's the sociotechnical context in which Threads is being deployed - as a kind of antidote to Twitter's perceived problem with negativity, conflict, and abuse. If Twitter wasn't an especially friendly place before Elon Musk took it over, it is much less so now. This might create demand for such a place, which Threads is well positioned to serve.

Then there's that pesky algorithm - the necessarily obscure formula that controls which posts appear at the top of your feed. Despite widespread skepticism toward algorithms, it's hard to imagine a popular social media platform without one, particularly one that aspires not to link small groups of people together (e.g., Facebook, Discord, GroupMe, and the way some people use Snapchat) but to give everyone the chance - however remote - to command an audience. Imagine ranking YouTube or TikTok videos chronologically, or at random. Some weighted combination of popularity, engagement (e.g., number of comments or shares), and predicted affinity (amount of time you've spent on similar posts) is the best way to keep users coming back for more. 
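That kind of weighted combination can be sketched as a toy scoring function. The weights and field names here are invented for illustration - not anything any platform has actually disclosed:

```python
def rank_score(post, w_pop=0.4, w_eng=0.3, w_aff=0.3):
    """Toy feed-ranking score: a weighted mix of popularity, engagement,
    and predicted affinity (all assumed pre-normalized to the 0-1 range)."""
    return (w_pop * post["popularity"]
            + w_eng * post["engagement"]
            + w_aff * post["affinity"])

# Two hypothetical posts: one broadly popular, one niche but engaging
feed = [
    {"id": "a", "popularity": 0.9, "engagement": 0.2, "affinity": 0.1},
    {"id": "b", "popularity": 0.3, "engagement": 0.8, "affinity": 0.9},
]
feed.sort(key=rank_score, reverse=True)
print([p["id"] for p in feed])  # the niche-but-engaging post "b" ranks first
```

Shifting the weights toward popularity approximates the trending-topics approach; shifting them toward predicted affinity approximates TikTok-style personalization.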

One way to go about ranking posts is to defer to the masses - showcase whatever is broadly popular, as Twitter does with its prominently displayed list of trending topics. Another way is to tailor it to each user's preferences - the niche approach favored by TikTok. The first kind of ranking creates a "main character of the day," a target for attention and ridicule on and beyond the platform. The second kind, supposedly, creates echo chambers (though evidence is mounting that, as intuitive as this understanding of personalized ranking is, it doesn't fit what most users actually see on social media). Inheriting its structure from Instagram, Threads seems to privilege, as Kyle Chayka put it, banal celebrities and self-branding. As masspersonal media where any user can potentially reach millions of other users, Threads cannot help but encourage a kind of performativity over connecting with a small group. 

Then there's the shift from images and short video to text. The whole reason Threads is being talked about as a Twitter clone is that it's primarily intended to be used for mass conversation. In their branching/nested structure (you can reply to a reply to a post, with each reply "nested" under the previous message), conversations on Threads resemble conversations on Reddit, and it will be interesting to see if future designs of Threads nudge users to engage more in the replies.

But I wonder about the brevity of text and what that does to conversations. The whole point of Twitter - what put the "micro" in "micro-blogging" - was the character limit (originally 140, upped to 280). It's well-suited to a fragmented attention universe, but I wonder if the tone of Twitter (witty, sure, but also mean) is an inevitable symptom of its mandatory brevity. Is there something about short-form writing that is bound to regress toward snark? Is that simply the nature of the medium, regardless of the combination of people and level of moderation? That's what Threads might give us a chance to observe.

Sunday, June 18, 2023

When subreddits go dark

Among the many unforeseen effects of ChatGPT's release is a change in policy at Reddit that has caused a significant disruption among its community moderators. Reddit has served as a valuable and, to date, free source of training data for ChatGPT and other large language model (LLM) AIs - billions of utterances from hundreds of millions of people about thousands of topics over a 15-year span. These LLMs are already worth billions of dollars, more than Reddit was ever worth during its first 15 years. It is therefore understandable that Reddit as a company wants to stop the practice of giving its back catalog of data away for free. They're not the only ones keen to point out that the training data used by LLMs, while ostensibly free to access, were created and facilitated by others who, it could be argued, were indispensable in the creation of now-popular AI programs like ChatGPT.

This isn't the only reason why Reddit would want to turn off the spigot of free access to its vast archives of posts and comments via an API. An ecosystem of third-party apps has flourished under this policy, resulting in the loss of untold hours of user attention to ads on Reddit's official app, and thus lost revenue. Many users have become accustomed to accessing Reddit this way, and are understandably upset at having to migrate to the official Reddit app, widely regarded as inferior to the third-party apps. 

Then there's the issue of how subreddit moderators use the API to more effectively moderate their communities. They can use the API to quickly assess a user's posting or comment history, to see whether a disruptive comment or post is part of a larger pattern and thus whether the user is worth banning (this includes spambots and trolls that, unmoderated, could overwhelm a subreddit with useless or disruptive content). They use it to determine when a question asked by a user has already been answered in the past, and to highlight that answer. The API is also relied upon by some users with disabilities as a way of accessing the platform. This post from the moderators of r/AskHistorians has a good summary of other ways in which mods rely on the API. 
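The kind of pattern-checking described above can be sketched as a toy heuristic. To be clear, the function, its name, and its thresholds below are hypothetical illustrations, not Reddit's or any mod team's actual tooling; in practice a mod bot would pull an account's recent history through the Reddit API (e.g., via a wrapper library like PRAW) rather than receive it as a plain list.

```python
def flag_for_review(history, min_comments=5, repeat_threshold=0.6):
    """Flag an account whose recent comments are mostly near-duplicates -
    a crude spambot signal. `history` is a list of comment strings,
    newest first. All names and thresholds here are hypothetical.
    """
    if len(history) < min_comments:
        return False  # too little history to establish a pattern
    # Fraction of comments that repeat another comment in the sample.
    repeat_fraction = 1 - len(set(history)) / len(history)
    return repeat_fraction >= repeat_threshold

# A bot posting the same link everywhere vs. an ordinary commenter:
bot_like = ["BUY NOW at example.com"] * 8 + ["great post"]
ordinary = ["nice photo", "source?", "agreed", "lol", "where is this?"]
```

A real moderation bot would combine a signal like this with account age, karma, and subreddit-specific rules before acting - which is exactly why losing cheap API access makes the job harder.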

Reddit as a company seems interested in addressing the accessibility issue, carving out an exception for third-party apps specifically designed for users with disabilities. Beyond that, they don't seem particularly interested in walking back their decision to charge for access to the API...yet. 

The conflict between the administrators (paid employees of the company) and moderators (unpaid volunteers who manage Reddit's tens of thousands of active communities) is a familiar one - management vs. labor. As with any such conflict, labor's ability to get what they want assumes that they can't easily be replaced by more willing labor - be it human or automated. Can the company still produce something of value to the consumer without the willing participation of current labor? 

In most cases, replacement labor produces something different than what was produced by the original labor, and in most cases, it's (at least initially) regarded as inferior (labor certainly has an interest in highlighting its inferiority). But the question of the fungibility of labor, from management's standpoint, in the world of social media is a tricky one. The culture and communities that live on popular social media platforms - the things that make them valuable - are constantly shifting. As in most cultures and communities in the physical world, users frequently lament these shifts, blaming new entrants to the community or powerful authorities. If they hate the changes enough, they leave. 

So, when moderators of a popular subreddit choose to go on strike, effectively killing that subreddit, what happens to its users? 

One possibility is that they leave the platform. This is, I would think, what the mods are trying to accomplish - driving traffic off the platform, hurting Reddit's bottom line, and getting management some bad press. Another possibility is that the users of that subreddit migrate to other existing subreddits - ones with moderators that didn't strike - and find that these other subreddits are roughly as good at satisfying their need for distraction, information, community, amusement, comfort, etc., resulting in a surge of activity on those subreddits. Yet another possibility is that new subreddits arise to meet the demand created by the absence of striking subreddits. "Splinter" subreddits - subreddits created to cater to disgruntled "refugees" from a subreddit that has changed in some disagreeable way - have always been a part of the subreddit ecosystem. Modularity is a defining feature of Reddit, something that sets it apart from the single, amorphous conversation on Twitter and the atomized, fleeting comments on TikTok, YouTube, and Instagram. In this case, it makes it harder for labor to force management to do anything. Unless a critical degree of solidarity among moderators is reached and sustained (a tall order, given the number and diversity of subreddits), it's hard to prevent within-platform user migration. 

Reddit's design - a feed or list of posts aggregated from subreddits to which users subscribe - can make it hard to even notice the absence of a striking subreddit. When I went to the site, it took me a while to realize what was missing; I was still seeing pictures of cute animals, funny things that people said or did, captivating vistas and infuriating news. Of course, the impact is going to vary from user to user. For some users, access to a particular subreddit can be as valuable as access to a close friend, one whose absence would result in a sense of profound loss. It's hard to tell how many users are like that - management has an interest in making it seem as though they are a small, vocal minority, while labor has an interest in making it seem as though most users are upset by the changes - so upset that they are already in the process of leaving.

It's possible that a large number of currently popular subreddits die as a result of this disruption, and that a large number of users leave the platform and don't come back. It's easy to point to moribund platforms like Digg or MySpace that never found a replacement community that could sustain the business. But that doesn't take into account the current business climate, in which social media platforms, news organizations, and even video and audio streaming services seem (with varying degrees of success) to be training consumers to pay - in subscription fees or attention to ads - for what they consume. If the era of free high-quality user-generated content is over, there may be no substitutable platform for disgruntled users to migrate to. 

It also may underestimate how organic and unpredictable large groups of people are. We get used to versions of these platforms - used to seeing a certain type of post or comment at the top of the feed, the popularity of which reflects the collective sensibilities of a voting constituency of users. That sensibility persists even in the face of high turnover among contributors - it is generated not by a stable group of super-users but by a rotating cast who cater to the preferences of the constituency. Disruptions like the current one can change the voting constituencies and thus change what we see at the top of our feeds. Perhaps mods' lack of free access to the API will make it effectively impossible to manage subreddits beyond a certain size (say, 1,000 active contributors), leading to a Reddit with no large communities, only moderately sized ones that, when they grow unmanageably large, spawn offshoots - something a bit more like Discord. That version of Reddit might be far more diverse in its interests and sensibilities, and ultimately more successful, than the version we're now accustomed to.