Sunday, August 25, 2024

What do we mean by "Influencer?"

One of the most fruitful panels I attended at this year's International Communication Association conference concerned the definition of "influencer." The term has always rubbed me the wrong way, always sounded like self-serving marketer-speak that misleads us about how influential anyone is online. As Crystal Abidin of Curtin University pointed out during the panel, the term is deployed strategically. If calling yourself an influencer makes you money and avoids the kind of regulation to which professional media producers are subject, then that's what you'll call yourself. 

What are the alternatives? "Content producer" is certainly less aggrandizing, but it also makes no claim with regard to reach, engagement, or any other metric of success. Even if only a few people watch my videos, I'm still a content producer. The same could be said of platform-specific or media-specific monikers like YouTuber, TikTokker, podcaster, streamer, or vlogger/blogger. There needed to be some term that differentiated online content producers with a certain level of success - however one wanted to define and measure success - from those without it. 

For a time, that term was "micro-celebrity," a name as diminishing (who wants to be thought of as a "micro" anything?) as "influencer" is aggrandizing. Aside from their opposite associations, differences in name matter for scholars, journalists, or interested members of the public seeking out research on this phenomenon. Search the databases for articles, chapters, and books on "influencers" and you'll be missing some of the most important work on the topic, like Alice Marwick's work on successful YouTubers - highly applicable to the successful TikTokkers we now refer to as "influencers." So, that was one takeaway: if you're interested in influencers, start with the scholarly work done on micro-celebrity. 

Number of followers = Influencer?

The European Commission defines Influencers as "content creators who often advertise or sell products on a regular basis." While this jibes with most people's conception of Influencers, it doesn't seem inclusive enough. It rules out anyone with a large audience who makes money through Patreon or pre-roll ads (in which case it is the platform doing the advertising rather than the content creator, though the creator still gets a cut of the ad revenue) instead of overt paid promotion. Such a user might be highly influential - raising awareness of a cause or an approach to investing or a political candidate - but would not meet the strict legal definition set forth by the European Commission.

Most of the research on Influencers, both qualitative and quantitative, points to some metric - usually the user's number of followers - as evidence that they are, in fact, an influencer. As best I can tell, the cut-off for Influencer status is arbitrary and/or based on round numbers chosen by marketing professionals. The bare minimum seems to be 1,000 followers, qualifying for the lowest tier of influencer - the "nano-influencer."

In addition to metrics, there seems to be an aesthetic component to the accepted definition of Influencer. Accounts referred to as Influencers tend to feature individuals or, less frequently, duos or families, who appear in all or most of their videos, typically addressing the camera/audience directly. A popular (and, perhaps, highly influential) account that compiled thematically related clips would not fit this aesthetic description, nor would an account run by a group of comedians. And yet sometimes, such accounts are referred to as influencers. 

Are Influencers Actually Influential?

Why does it matter whom we call influencers? What are we assuming about them when we take them as a starting point to studying or reporting on some social phenomenon?

All members of the ICA panel seemed skeptical that all, or even most, influencers are especially influential, and skeptical of the direct relationship - often assumed by journalists, users, and even researchers - between follower count and influence. Marketers would tell you that "reach" is an important aspect of influence, but not the only aspect. To be influential, it helps to be well liked. And sure, the act of following an account implies affinity, but how many of those who follow an account see every post, "like" every post, and have their beliefs, attitudes, or behavior influenced by every post? Only a subset of followers actually sees any given post, and of those, a smaller subset might be persuadable, depending on their mood and the proximity between their current beliefs and attitudes and those of the influencer.
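
To put rough numbers on that funnel, here is a minimal sketch. Every rate below is a hypothetical assumption chosen for illustration, not a measurement.

```python
# Hypothetical reach-versus-influence funnel. Every rate below is an
# assumption for illustration, not a measured value.
followers = 1_000_000        # nominal "reach" of the account
see_rate = 0.15              # share of followers who are actually shown a given post
persuadable_rate = 0.05      # share of viewers open to persuasion on this topic right now

viewers = followers * see_rate
persuadable = viewers * persuadable_rate

print(f"{viewers:,.0f} followers see the post")         # 150,000
print(f"{persuadable:,.0f} are plausibly persuadable")  # 7,500
```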

Most press articles and more than a few academic articles and books about influencers don't make the distinction between reach and influence. At best, these models of influence lack nuance but are still fundamentally sound. All things being equal, a content creator with 1,000,000 followers is probably more influential than one with 10,000 followers. This kind of relative judgment of influence is a logic that underpins the entire advertising industry - an industry, it should be noted, that gave rise to the term "influencer." How effective is a given ad at persuading people to purchase a product? Surprisingly, it's still difficult to tell.

Part of the reason it's difficult to tell is that companies creating and selling ads don't have much incentive to prove the magnitude of their efficacy. All they have to prove to the companies whose goods and services they promote is a relative advantage in the marketplace: as long as those clients are more visible to potential consumers than their competition, they're more likely to outsell that competition. 

But for those of us who care less about selling one particular set of products or services and more about influence in general (whom to vote for, whether or not to get vaccinated, changing one's beliefs about capitalism or gender equality, etc.), the magnitude of influence matters. Simply equating exposure with influence puts us back at the hypodermic needle model of media effects - everyone who is exposed to a media message reacts to it the same way. That theory was debunked (or at least modified) over 60 years ago by the Limited Media Effects paradigm.

Some might argue that it's different this time around. Influencers are more influential than standard mediated promotion because audiences/followers feel as though they have a relationship with the influencer (i.e., a parasocial relationship), and because influencers are perceived to be more authentic than celebrities. At times, they actually interact with audience members through comments or replies. We know that one's peers can have an influence many times greater than that of impersonal mass promotion campaigns, and the relationship between influencers and their followers is thought to be like that of peers. For the relatively few followers who repeatedly comment and receive replies from influencers (i.e., interact with them repeatedly), this seems plausible. For the majority (90-95% of followers, I'd guess) who don't, it seems more like the relationship between a talk show host and their audience - more personal (and thus more influential) than the relationship between a fictional character and an audience or between mass advertising and an audience, but less so than an actual close friend. Might influencers be more influential than impersonal TV advertising? Sure, but that's a pretty low bar. 

It seems most likely to me that influencers are highly influential under certain conditions. They're probably good at directing attention to people, places, or things that have yet to attract much attention. They can take an unknown product, social cause, political candidate, or location and make it enormously popular (or enormously hated) overnight. It strikes me as far less likely that influencers can get people to change their minds about a person, place, or thing once their audience already knows about it and has formed an opinion. This is nothing new: that's how persuasion works - easy when people don't have an awareness or opinion of something or someone, difficult when they already do. Any marketer, campaign manager, or media effects researcher from 60 years ago could have told you that. So, a more accurate moniker for influencers might be "attention directors."

The Real Influencers

Consider this alternative: perhaps the people who "like," share, repeatedly view, or discuss content online have the greatest amount of influence in our current mediascape. This largely anonymous, highly engaged group is far smaller than the general public, and in most cases smaller than the audience, which includes casual, less engaged people. The influence of the highly engaged group over the influencer is subtle, but worth thinking about. 

It might help to take the perspective of someone who sets out to be an influencer. They have something they want to say, some "self" they want to express. They express it...and get very little attention from audiences. Disappointed, they take a quick scan of the most popular influencers overall and in their particular domain (say, gaming, sports, or political commentary). They get a sense of the things those people say or do that make them popular - aesthetic choices like editing pace and clarity of message, but also ideological choices, embodiments of personality, tone, language, sense of humor, etc. They begin to adopt some of these, reluctant to wander too far away from their original expression of self, both because it makes them uncomfortable and because they worry about being perceived as inauthentic. They get a bit of positive feedback - more likes, more comments, more attention - and so they keep doing it. 

The collective influence of the highly engaged audience over the influencer might be even subtler than that. Maybe the influencer watched thousands of videos while growing up, simply seeking what most audiences seek - entertainment. Through that process, they absorbed online norms, gradually developing an intuition about what differentiates popular content on a given topic and within a given online subculture from less popular content within that context. At some point, they decide to try their hand at creating content, but what it occurs to them to create - the range of expressive possibilities - is inevitably shaped by what they've already watched. They're not consciously trying to mimic the existing content that effectively caters to audiences' preferences and values, but are apt to cater to them nonetheless. That influencer becomes popular - they have hundreds of thousands of subscribers and millions of views. But what they say and how they say it must conform to the preferences and values of the highly engaged audience, or else it wouldn't become popular in the first place. 

Again, this is nothing new. Novelists, TV writers, and screenwriters know that in order to reach a large enough audience to make for a sustainable career in a capitalist marketplace, you need to - in some sense - cater to a highly influential subset of that audience (critics, opinion leaders, execs). It's crass to think that way - most creators want to see their work as pure acts of self-expression and creativity uninfluenced by the marketplace. And this is not to say that there isn't any original creativity on the part of the artist or content creator. In fact, all audiences demand it: if you simply served them up something that already existed, they wouldn't bite. There has to be some element of novelty to it, some spark of originality, individuality, and authenticity...but it also needs to fit within a set of generic conventions, and those conventions are articulated through the habits of audiences or some highly engaged subset of the audiences. In the box office business model of media, one person's dollar is as good as another's. In the ad-supported model, certain demographics are more valued than others. And in the online attention economy, the highly engaged audience determines the downstream visibility of content, and is thus more valuable and more influential.

Reasons to be Skeptical

A lot of people want to believe that influencers (or, more broadly, any content on TikTok, Instagram, YouTube, or X/Twitter) are influential. The content creators themselves (obviously) want to believe that they have influence, and so do many researchers and journalists. It doesn't have to be so, but I think there is a bias among many who study and write about influencers to find evidence of significant influence, to protect their work against accusations of triviality - something that scholars of popular culture have dealt with for decades. My sense is that young researchers are drawn to study social media because they believe that it matters, that it is influential, so there's a bit of self-selection bias - they enter the arena looking for evidence of influence. Social media platforms want to highlight such evidence because it makes their companies more valuable, but also (somewhat less cynically) more important; their work matters. Governments, parents, and pretty much everyone else stand to benefit by blaming every social ill on social media and, by extension, influencers. There is little downside (at least for the blamers) to blaming greedy billionaires, narcissistic influencers, and opaque algorithms for social ills. It's certainly easier than fixing other long-entrenched causes of systemic inequities or one's own personal issues, or simply accepting that humans were never designed to optimize social harmony or happiness. 

That's not to say we shouldn't strive for greater social harmony and happiness; only that we should avoid seeking comfort in scapegoats. How do we know when influencers, social media platforms, or algorithms are merely scapegoats and not genuine threats? This is the hard work of good media research.


Tuesday, December 19, 2023

All the Media Content We Cannot See

Like the majority of the electromagnetic spectrum, most of any given high-choice media landscape - be it YouTube, TikTok, or even Netflix - is difficult to see without some kind of aid, and thus easy to forget about. One might argue that the most important stuff - the content that has the most influence on individuals and society, i.e., the popular stuff - is easily visible through Top Ten or "trending" lists on the platform itself or through articles, podcasts, and conversations of cultural critics. But how much of the entire spectrum of content - or, if you take a human-centered approach to the question, of viewing hours - are we observing when we talk about this tall head of the distribution's long tail?

The answer has implications for how we conceive of the culture we live in. Often, we assume we can get a pretty good sense of a culture by observing what media content it chooses to spend its time with. The topics, values, and aesthetics of popular content have long been thought to reflect and/or shape the preoccupations of the culture. This was all easy enough during the era of mass media when choice was limited, although even then it oversimplified the character of a culture. We look back on the late 1960s in America and think of psychedelia and unrest, but plenty of folks living in that place and time were likely oblivious to such trends. Still, it seems safe to say that you could get at least some idea of what most people living in a certain place and time were thinking and feeling by examining its popular media content. 

It's a commonplace that the number of choices for media content has exploded in the past decade or two. Truly understanding how content relates to culture - or trying to derive a sense of culture by examining content - has become trickier. In the era of broadcast TV, it wouldn't take much time for anyone to watch episodes of the 10 most popular TV shows. Out of the total number of viewing hours in a given culture, that might get you, say, 50% of them. The other 50% of the viewing hours would be distributed across less popular programming, so you could make a decent claim to "knowing" a culture by examining 10 popular TV shows. What would a similar approach get you now, if applied to Netflix?

According to recently released data from Netflix, viewers watched a total of roughly 90 billion hours in the first half of 2023. Of those hours, the top ten shows accounted for 4.9 billion - or roughly 5% of the total. Watching episodes of these ten shows, then, wouldn't be a very good way to get an idea of what Netflix viewers, generally, were watching (or, by extension, what they thought or how they felt about anything). It may be that the shows are in some way representative of the larger whole - in terms of their genre, topic, tone, aesthetic, values, etc. - but given the relatively small proportion of the whole they represent, there is reason to suspect that we are missing a lot about this group of people and their preoccupations if we only take into account the most popular content. 
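
The arithmetic behind that proportion is simple enough to check; here's a quick sketch using the Netflix figures cited above.

```python
# Share of total Netflix viewing captured by the top ten shows, Jan-Jun 2023,
# using the rounded figures cited above.
total_hours = 90e9      # ~90 billion hours viewed across the catalog
top_ten_hours = 4.9e9   # hours accounted for by the ten most-watched shows

print(f"Top ten shows = {top_ten_hours / total_hours:.1%} of all viewing")  # ~5.4%
```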

But this is where many of us start, and by "us" I mean scholars and researchers as well as cultural critics, content creators seeking to create content that resonates with an audience, or marketers. What other option do we have? 

One alternative would be to take stratified samples from further down the distribution tail, an approach used in this article from The Hollywood Reporter. It's important to note that such an approach requires that the platforms make their data available in ways that make it feasible, and in this respect, Netflix has done us a huge favor. It is more difficult to get underneath the trending surface of TikTok or YouTube to try to get even a rough idea of what the rest of it looks like. 

And with YouTube and TikTok, the problem of unaccounted-for content is likely much worse. 

Let's do some back-of-the-envelope* calculations to try to see how little of the content universe we're seeing when we examine, say, the top ten TikTok videos from last year. There are roughly 1.1 billion active monthly TikTok users. The average user spends 95 minutes on the app per day. So, that's a total of roughly 104.5 billion minutes per day, or roughly 38 trillion minutes per year. The most viewed TikTok video of 2023 had 504 million views and is roughly 30 seconds long. Obviously, the next nine had fewer views than this, but I'm finding it difficult to obtain raw view numbers for each video (it's easy to find the number of followers, but plenty of people watch TikTok videos created by users they don't follow). So, let's err on the side of overestimating and say that each video is 1 minute long and is watched 500 million times. By watching the top ten TikTok videos, we are accounting for 5 billion minutes of viewing. What proportion of the total are we seeing?

Before we do the math, it's worth remembering our tendency to fail to see meaningful differences among very small proportions. We can pretty easily tell the difference between 20% of something and 5% of it but fail to differentiate between .1% and .01%, even though the latter pair differ by a factor of ten while the former differ only by a factor of four. Often, we just think of anything below 1% of something as "very small," whether it's .5% or .05%. But if we're really trying to know something - a culture, a media diet, etc. - it's important to correct for that bias and recognize just how small the proportion really is. 

Watching the top 10 TikTok videos of 2023 would account for little more than .01% (roughly one hundredth of one percent) of all TikTok viewing. Even if we extend the same generous assumptions to the top 100 videos - each a minute long and viewed 500 million times, though most are shorter and less viewed - watching all of them (a feasible, if time-consuming, task) would still account for barely a tenth of one percent of content viewed on TikTok. 
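
Here is the back-of-the-envelope arithmetic in one place, using the same rounded (and admittedly uncertain) figures; treat the results as order-of-magnitude estimates only.

```python
# Back-of-the-envelope: what share of annual TikTok viewing do the top videos represent?
# All inputs are the rough public figures cited above.
users = 1.1e9                   # monthly active users
minutes_per_user_per_day = 95
total_minutes_per_year = users * minutes_per_user_per_day * 365  # ~3.8e13 (about 38 trillion)

# Generous overestimate: each top video is 1 minute long and viewed 500 million times.
minutes_per_top_video = 1 * 500e6

top10_share = 10 * minutes_per_top_video / total_minutes_per_year
top100_share = 100 * minutes_per_top_video / total_minutes_per_year  # upper bound

print(f"Total viewing: {total_minutes_per_year:.1e} minutes per year")
print(f"Top 10 videos:  {top10_share:.3%} of all viewing")   # ~0.013%
print(f"Top 100 videos: {top100_share:.3%} at most")         # ~0.131%
```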

Even if we are studying a particular topic or domain within these high-choice environments - say, political messages or health-related messages - sampling only the most popular videos doesn't get us anywhere near the complete or representative sample that it once did in the low-choice days of mass media. Most viewing is happening outside of the sample, further down the distribution tail. Until we reckon with the vast size of these media environments and the diversity of users' media diets, it's hard to know what we're missing.


*If anyone has more accurate usage data, I would love to see it! I don't have supreme faith in these data, but it's the best I could find right now. 

Wednesday, September 27, 2023

So you want to be an influencer

There's something about the name of the major in the department in which I teach - "Creative Media" - that, for many first-year students, brings to mind the career of an influencer. So as to disabuse them of the notion that our major will teach them how to be an influencer, I outline the differences between the career of a media professional - a broad category encompassing screenwriters, directors, producers, editors, camera-people, newscasters, sound engineers, etc. - and the career of an influencer. In searching for a metaphor or parallel to describe the career of an influencer, I typically refer to pop star musicians (though the following is likely applicable to any genre of popular music - rap, country, etc.). 

On the upside, the barrier to entry is low - anyone can start playing music, post that music online, promote it on social media, develop a following, become famous, and earn plenty of money. This is in contrast to many media professional positions that require access to expensive equipment, social and/or geographic proximity to connections in the business, competitive apprenticeships, and a track record of proven success. On the downside, there is a lot more competition when the barrier is low. There's always someone younger, hotter, funnier, edgier, and more novel than you, and they're so eager and hungry for attention that they'll be happy to take that sponsorship deal you turn up your nose at. There's no incentive for platforms like YouTube, TikTok, or Spotify to share much revenue with creators because there's a never-ending talent pipeline, and so they tend to pay creators very little.

Generally, pop star careers are shorter than those of many media professionals, again because of the low barrier to entry, their replaceability, and the audience's desire for novelty. Of the small percentage of influencers who achieve success, it's hard to find ones who maintain it for more than a few years. This is in contrast to the aforementioned media professional careers, which typically last decades, with salaries and job security increasing over time. 

There's also the challenge of maintaining the pace of output that being an influencer demands. Whereas audiences are trained to expect a new song from a musician maybe once a year, 13 new episodes of a TV show every year, and a new film from a well-known director every several years, influencers are expected to generate new content at least once a month. Maintaining that pace for years can be taxing. Looking at the Wikipedia entries of several popular influencers from the 2010s, the word "hiatus" frequently appears - an understandable response to the non-stop production schedule. This is to say nothing of the effects of public scrutiny on one's mental health, the blurring of personal and professional identities, the loss of privacy - none of which are issues for the average editor, screenwriter, or audio engineer. 

Other influencers try to make the jump to the mainstream, collaborating with established media professionals, making movies or TV shows, parlaying their success on the web into something more lasting. Some succeed while most do not. Gradually, I think influencers and the entertainment industry will get better at intuiting which personalities will transfer to the big screen and which are better suited to TikTok, YouTube, podcasts, etc. 

Another antecedent to the influencer is the career of the reality TV star, though reality stars seem to rely more heavily on personal appearances or sponsorship gigs than influencers, who seem to monetize their content more effectively and exert more control over their image from the start. Maybe the similarity is less related to their career trajectory and more to their relationship to audiences - more intimate and ordinary than the average actor or director.

This all sounds like I'm trying to dissuade students from pursuing the life of an influencer, which I'm not. The fact is that tens (or maybe hundreds?) of thousands of influencers (broadly defined) make enough money to live on. I'd guess that this is more than the number of people who make a living at being a pop star, but maybe less than those who make a living as a musician. 

Being an influencer, like being a pop star, seems to require "natural talent." There's only so much you can be taught about how to succeed in those realms, and a college classroom certainly isn't the place to learn it. Better to just watch some tutorial videos, go out there, and do it. And if you got it, you got it, and if you don't, you don't. I can't think of a reason not to pursue both paths - the path of the influencer and the path of the media professional - simultaneously, though the time demands of either path will eventually force you to decide. 

As the influencer phenomenon continues to age, we'll get better at answering these questions about that career: Do videos that were popular years earlier keep generating enough revenue for influencers to make a living? Do sponsorship deals persist or do they dry up? What does the second (or third) act of the career of an influencer look like?

Sunday, September 10, 2023

Do people care who (or what) wrote this?

Generative A.I. as a writing tool has limitations. But what I've discovered over the past week is that my perceptions of those limitations can drastically change when I learn about a new way to use it. Before, I'd been giving ChatGPT fairly vague prompts: "Describe the town of Cottondale, Alabama as a news article." Listening to a copy.ai prompt engineer on Freakonomics helped me understand that being more specific in your prompts about the length of the output ("500-1000 words") and the audience ("highly-educated audience") makes all the difference. 

The other key lesson is to think of writing with A.I. as an iterative collaboration: ask the program to generate five options, use your good ol' fashioned human judgment to select the best one, then ask it to further refine or develop that option. If you find the result boring, ask it to vary the sentence structure or to generate five new metaphors for something and then pick the best one. I had sensed that writing with generative A.I. could be more like a collaboration with a co-author than an enhanced version of auto-correct; these lessons helped me see what, exactly, that collaboration looks like, and how to collaborate with the program effectively. 
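
As a concrete sketch of that loop - generate several options, apply human judgment, then refine the winner - here's roughly what the iteration looks like. The generate() helper is a placeholder for whatever model or API you happen to use; it and the prompts are my assumptions, not a real library call.

```python
# A rough sketch of the iterative collaboration described above. generate() is a
# placeholder for a call to whatever text model you use; swap in a real API call.

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model output for: {prompt[:60]}...]"

def choose_best(options: list[str]) -> str:
    """The human step: read the candidates and pick one (here, just the first)."""
    for i, option in enumerate(options, 1):
        print(f"--- Option {i} ---\n{option}\n")
    return options[0]

brief = ("Describe the town of Cottondale, Alabama as a 500-1000 word "
         "news article for a highly educated audience.")

# Round 1: ask for several candidates instead of a single draft.
drafts = [generate(f"{brief} (variation {i + 1})") for i in range(5)]
draft = choose_best(drafts)

# Round 2: refine the chosen draft rather than starting over.
revisions = [generate(f"Vary the sentence structure and sharpen the metaphors:\n\n{draft}")
             for _ in range(5)]
final = choose_best(revisions)
print(final)
```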

As the output got better and better, I wondered, "has anyone done a blind test of readers' ability to discern A.I.-assisted writing from purely human writing?" I'd heard of a few misleading journalistic stunts where writers trick readers into thinking that they're reading human writing when, in fact, they are not. But I'm looking for something more rigorous, something that compares readers' abilities to discern that difference across genres of writing: short news articles, poetry, short stories, long-form journalism, short documentary scripts, etc. It seems likely that readers will prefer the A.I.-assisted version in some cases, but it's important to know what types of cases those will be. 

I also wondered what our reactions - as readers and writers - to all of this will be. I can think of three metaphors for possible reactions to A.I.-assisted writing:

1) the word processor. Its use changed how writers write. It changed the output. Like most disruptive technologies, it was met with skepticism and hostility. But eventually, it was widely adopted. Young writers who hadn't grown up writing by hand had an easier time adapting to this new way of writing. The technology became "domesticated" - normal to the point of being invisible, embedded in pre-existing structures of economy and society. 

2) machine-generated art. Machines have been generating visual art for decades. Some of that art is indistinguishable from human-generated visual art. Some of it embodies the kinds of aesthetic characteristics that people value. And yet machine-generated art has never risen beyond a small niche. The market for visual art largely rejects it, in part because those who enjoy art care about how it is created. Something about the person who created it and the process by which it was created is part of what they value about art. 

3) performance enhancing drugs. Output from A.I.-assisted writing is superior - in some cases far superior - to unaided human writing, and there is market demand for it - the public sets aside its qualms and embraces good writing regardless of how it came about. This situation is perceived by writers, some industries, and some governments as unfair or possibly dangerous, maybe in terms of what bad actors could do with such a tool or how profoundly disruptive its widespread use would be for economies and society. Therefore, they regulate it, discourage its use through public shaming, or, in some cases, explicitly forbid its use. 

The quality of A.I.-assisted writing's output is only part of what will determine its eventual place in our lives. The general public's reaction to it is another part worth paying attention to. 

Friday, August 25, 2023

An ethical case for using A.I. in creative domains

A few months after first considering the promise and threat of A.I. in creative domains, it's still the threats that are getting the most attention. I tend to hear less about the possibility that by allowing A.I. to be used widely (which helps it grow more sophisticated) we are hastening a machine-led apocalypse and more about what we would lose if we replaced human writers with A.I. It would be an obvious loss for the people who write for a living, but they make the case that it would be a loss for society. Creativity would decline, mediocrity would flourish, and we would lose the ineffable sense of humanity that makes great art. By taking the power to create out of the hands of the many writers and putting it in the hands of the few big tech companies, we would exacerbate inequality and consolidate control over culture. 

There are a few steps in this hypothetical process worth scrutinizing. First, this argument assumes that if A.I. is allowed to be used in a creative field (screenwriting, journalism, education), it will necessarily lead to the replacement of human labor. There's a market logic to this: if you owned a company and you could automate a process at a fraction of the cost of paying someone to do it, you would have to automate it. If you didn't, your competition would automate it, be able to produce an equivalent good or experience at a lower cost, charge consumers less for it, be a better value to shareholders as a publicly traded company, and put you out of business. You could point to examples of such things happening in the past as evidence of this logic (though I have to admit, I found it hard to find examples involving human communication rather than physical labor. I'd assumed chatbots had led to steep declines in customer service labor, but all I could find were editorials about how they will lead to steep declines and competing editorials about how customers find chatbots enraging and still demand human customer service agents). 

But I still have trouble thinking of this particular replacement-of-human-labor trajectory as inevitable. I can't help but think of A.I. as a tool that humans use rather than a replacement for humans, more like a word processor or the internet than a brain. I keep seeing a future (and, honestly, a present) in which writers of all kinds use A.I. for parts of the writing process: formatting, idea generation, wordsmithing. Humans prompt the A.I., evaluate its output, edit it, combine it with something they generated, and share an attribution with the A.I. You could call this collaboration or you could call it supervision, depending on how optimistic or pessimistic you are, but the work it produces is likely better than what A.I. generates on its own and arrives faster than what humans generate on their own. And humans who prompt, edit, evaluate, and contribute to creating quality work are as necessary as they were before. They can still use that necessity to make their case when bargaining with corporate ownership. 

I also have trouble seeing a marketplace in which all content is generated by A.I. If the A.I. can only generate mediocre content, won't people recognize its mediocrity and prefer human-made creative work? It's hard not to see this particular facet of the argument against A.I. in creative fields as elitist snobbery - "of course the masses will choose the A.I.-generated dreck they're served! We highly-educated members of the creative class must save them by directing them toward 'True Art,' (which we just happened to create and have a financial stake in preserving)."

And that is an ethical argument for A.I. in creative fields that I have yet to hear: the argument that the class of people who are currently paid for being creative are protectionist. If they can just keep us thinking about Big Tech and the obscenely wealthy studio execs, we won't have to think about the vast number of smart, creative, compassionate people who happen to not know how to write well, or to write well in a particular language. I worked hard at becoming a good writer, spending a lot of time and money to acquire this marketable skill. Does that make it morally right to deprive others of the ability to use a writing tool that levels the creative playing field? I assume there are millions of people with the life experience and creativity to be great writers who simply lack the educational experience to craft grammatically correct prose. Who am I to insist they take out loans and wait years before they can make worthy artistic contributions?

I do understand the replacement-of-human-labor argument against A.I. The anti-protectionism argument doesn't really resolve, or even speak to, the market logic argument. I suppose this is what smart regulation does - limit the use of technology in cases where we see clear evidence of social harm but allow it where there are opportunities for social good. As an educator, I want to make sure that students understand how to recognize the characteristics of "good" (i.e., compelling, effective at communicating, standing the test of time) writing, even if they need a little help getting their subjects and verbs to agree.

It can be hard to see the good of A.I. in creative realms at this stage in the development cycle. It is hard to see the would-be writers and the untold stories, but any ethical approach to the question of A.I. in creative fields must consider them. 

Sunday, August 20, 2023

Types of audience fragmentation

I'm embarking on a new large-scale project relating to audience fragmentation. Or rather, I have been embarking on it for the past year - such is the leisurely pace of the post-tenure research agenda. It started as a refutation of the echo chamber as an intuitive but overly simplistic characterization of audiences' media diets in the age of information abundance. Then I realized that someone already wrote that book.

In researching the idea, I was surprised to find how few studies about fragmenting audiences and echo chambers even tried to capture what I felt was the right kind of data: data capturing the whole of people's media diets - not aggregate audience data, not what individual users post on a particular platform, not even the amount of time or what individuals see on a particular platform, but ALL of what they see across all platforms and media. Unless you capture that, you really have no way of knowing whether individuals have any overlap with one another in what content they consume and/or how many of them are sequestering themselves in ideologically polarized echo chambers. 

In defense of researchers, this is a hard kind of data to get. What media content people consume is often a private matter. It's just hard - for an academic researcher, a company, a government - to get people to trust them enough to hand over that data. Observing people might cause them to change their behavior. Still, some researchers have made inroads - working with representative samples, trying to get precise, granular data on exactly what content people are seeing - and I think that if we start to piece together what they have gathered and supplement it with new data, we'll be able to get a better sense of what audience fragmentation actually looks like. 

I've started the process of collecting that data. In a survey, I've asked a sample of college students to post URLs of the last 10 TikTok videos they watched, the last 10 YouTube videos they watched, and the last 10 streaming or TV shows they watched. I'm anticipating that there will be more overlap in the TV data than in the YouTube or TikTok data. But I wonder what counts as meaningful when it comes to overlap or fragmentation. I return to the age-old question: so what?

Let's say you have a group of 100 people. In one scenario, 50 of them watch NFL highlight videos, 25 watch far-right propaganda videos, and 25 watch far-left propaganda videos. In another scenario, all 100 of them watch 100 different videos about knitting. The latter audience, as a whole, is more fragmented than the former audience. The former is more polarized in terms of the content it consumes - half of the sample can be said to occupy echo chambers, either on the right or left. 

It's clear to me why the polarization of media diets matters - it likely drives political violence, instability, etc. But why does fragmentation, in and of itself, matter? 

I guess one fear is that we will no longer have any common experiences, and that will make it harder to feel like we all live in the same society - not as bad as being ideologically polarized, but it's plausible to think that it might lead to a lack of empathy or understanding. But what counts as a common experience? Do we have to have consumed the same media text? Stuart Hall would tell you, in case you didn't already know, that different people watching the same TV episode can process it in different ways, leading to different outcomes. But at least there would be some common ground or experience. 

But what if we watched the same genre of television show, or watched the same type of video (e.g., videos about knitting)? If we contrast the 100 people who all watched different knitting videos to 100 people who all watched videos about 100 very different topics (e.g., knitting, fistfights, European history, coding, basketball highlights, lifestyle porn, etc.), I would think that the former group would have more to talk about - more common ground and experience - than the latter, despite the fact that there is an equal amount of overlap (which is to say, no overlap) in terms of each discrete video they watched. 

Instead of just looking at fragmentation across discrete texts, it would also be useful to look at it across genres or types. It could get tricky determining what qualifies as a meaningful genre or type of TikTok video. Some TikTok videos share a set of aesthetic conventions but may not convey the same set of values, or vice versa. There will be some similarities across the texts in people's media diets, even if there is no overlap in the discrete texts. The challenge now is to decide which similarities are meaningful.
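
One way to make that question concrete is to compute overlap at more than one level of description. Here's a minimal sketch, assuming a toy data structure in which each person's recent diet is a list of (video_id, genre) pairs; the data, the genre labels, and the averaging choice are all my assumptions, not a settled method.

```python
from itertools import combinations

# Toy media diets: each person's recent viewing as (video_id, genre) pairs.
# All values are hypothetical, for illustration only.
diets = {
    "A": [("v1", "knitting"), ("v2", "knitting"), ("v3", "nfl_highlights")],
    "B": [("v4", "knitting"), ("v5", "knitting"), ("v6", "knitting")],
    "C": [("v3", "nfl_highlights"), ("v7", "coding"), ("v8", "coding")],
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def mean_pairwise_overlap(level: int) -> float:
    """Average Jaccard overlap across all pairs of people.
    level=0 compares discrete videos; level=1 compares genres."""
    sets = {person: {item[level] for item in items} for person, items in diets.items()}
    pairs = list(combinations(sets, 2))
    return sum(jaccard(sets[p], sets[q]) for p, q in pairs) / len(pairs)

print(f"Overlap in discrete videos: {mean_pairwise_overlap(0):.2f}")  # low (~0.07)
print(f"Overlap in genres:          {mean_pairwise_overlap(1):.2f}")  # higher (~0.28)
```

The point of the two numbers: a group can look completely fragmented at the level of discrete videos and still share a good deal of common ground at the level of genre.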

Wednesday, July 12, 2023

Micro-blogging, Take 2

As a social media platform, you know you've achieved success when others start cloning you. It's easy to call to mind the successful copycat platforms that, in several cases, far exceeded their predecessors: Facebook (MySpace, Friendster), Reddit (Digg). It's a bit harder to recall the many clones that never made it (Voat, Google+, Orkut), typically because the network effects intrinsic to platforms' success put those with small userbases at a distinct disadvantage, or because they lacked the infrastructure and/or revenue to support a rapidly growing userbase. In other words, there typically aren't enough people to make the place interesting or valuable, or there are too many people to keep running/moderating the platform for free. 

But Meta/Facebook/Instagram's introduction of Threads is different in this regard, giving us a chance to see what a clone could do if it didn't have to worry about those two problems. Threads has already successfully ported 100 million users from Instagram, maintaining the network structure among interest/affinity groups and the connections between established influencers and their audiences. It also has Meta's massive infrastructure at its disposal - growth won't be a problem. And so we have a rare opportunity to see if this version of a micro-blogging platform - already operating at a scale similar to the existing leader, Twitter - will be all that different from what came before. 

Mark Zuckerberg has pitched Threads as a friendlier version of Twitter. Broad generalizations about the emotional valence of any social space inevitably oversimplify - you can find pockets of friendliness and hostility among virtually any large group of people, online or offline. Still, it's entirely possible that one space could tend to be friendlier than another - that's an empirically testable claim (provided you can agree on how to measure "friendliness"). 

Before trying to determine whether Threads has or is likely to achieve this goal (or whether such a goal is desirable, or if friendliness and ideological heterogeneity are mutually exclusive), it's worth considering how it might go about achieving it. Most obviously, more content moderation might tamp down overt hostility. Less obviously, there are facets of the platform that affect linkages among users - which users' posts are visible to other users. 

By importing lists of followers and popular accounts from Instagram, Threads imported a set of cultural norms, one that evolved over the last decade and privileged attractive or attention-getting still images over words, audio, or video. Broadly speaking, there's a kind of showiness to Instagram, and a content-ranking system that rewards positivity (some would argue to a toxic degree). Then there's the sociotechnical context in which Threads is being deployed - as a kind of antidote to Twitter's perceived problem with negativity, conflict, and abuse. If Twitter wasn't an especially friendly place before Elon Musk took it over, it is much less so now. This might create demand for such a place, which Threads is well positioned to serve.

Then there's that pesky algorithm - the necessarily obscure formula that controls which posts appear at the top of your feed. Despite widespread skepticism toward algorithms, it's hard to imagine a popular social media platform without one, particularly one that aspires not to link small groups of people together (e.g., Facebook, Discord, GroupMe, and the way some people use Snapchat) but to give everyone the chance - however remote - to command an audience. Imagine ranking YouTube or TikTok videos chronologically, or at random. Some weighted combination of popularity, engagement (e.g., number of comments or shares), and predicted affinity (amount of time you've spent on similar posts) is the best way to keep users coming back for more. 
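
To make "weighted combination" concrete, here's a minimal sketch of what such a ranking score might look like. The features, the log scaling, and the weights are assumptions for illustration; no platform publishes its actual formula.

```python
from dataclasses import dataclass
from math import log1p

@dataclass
class Post:
    views: int        # rough popularity signal
    engagements: int  # e.g., comments + shares
    affinity: float   # predicted affinity for this user, 0-1 (e.g., dwell time on similar posts)

def rank_score(p: Post, w_pop: float = 0.3, w_eng: float = 0.3, w_aff: float = 0.4) -> float:
    """Hypothetical feed-ranking score: a weighted mix of popularity, engagement,
    and predicted personal affinity. Log scaling keeps raw counts from dominating."""
    return (w_pop * log1p(p.views)
            + w_eng * log1p(p.engagements)
            + w_aff * p.affinity * 10)

feed = [Post(5_000_000, 12_000, 0.10),   # broadly popular, low personal affinity
        Post(40_000, 900, 0.90),         # niche, high personal affinity
        Post(300, 5, 0.95)]              # tiny reach, very high affinity

for post in sorted(feed, key=rank_score, reverse=True):
    print(round(rank_score(post), 2), post)
```

Shift the weights toward popularity and you get something like Twitter's trending list; shift them toward affinity and you get something closer to TikTok's For You page - the two approaches the next paragraph contrasts.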

One way to go about ranking posts is to defer to the masses - showcase whatever is broadly popular, as Twitter does with its prominently displayed list of trending topics. Another way is to tailor it to each user's preferences - the niche approach favored by TikTok. The first kind of ranking creates a "main character of the day," a target for attention and ridicule on and beyond the platform. The second kind, supposedly, creates echo chambers (though evidence is mounting that, as intuitive as this understanding of personalized ranking is, it doesn't fit what most users actually see on social media). Inheriting its structure from Instagram, Threads seems to privilege, as Kyle Chayka put it, banal celebrities and self-branding. As masspersonal media where any user can potentially reach millions of other users, Threads cannot help but encourage a kind of performativity over connecting with a small group. 

Then there's the shift from images and short video to text. The whole reason Threads is being talked about as a Twitter clone is that it's primarily intended to be used for mass conversation. In their branching/nested structure (you can reply to a reply to a post, with each reply "nested" under the previous message), conversations on Threads resemble conversations on Reddit, and it will be interesting to see if future designs of Threads nudge users to engage more in the replies.

But I wonder about the brevity of text and what that does to conversations. The whole point of Twitter - what put the "micro" in "micro-blogging" - was the character limit (originally 140, upped to 280). It's well-suited to a fragmented attention universe, but I wonder if the tone of Twitter (witty, sure, but also mean) is an inevitable symptom of its mandatory brevity. Is there something about short-form writing that is bound to regress toward snark? Is that simply the nature of the medium, regardless of the combination of people and level of moderation? That's what Threads might give us a chance to observe.