on fandom and content policing
So, listen.
While we’re all having a good laugh and/or panic at tumblr’s incompetent censorship implosion, I just want to take this opportunity to draw a parallel to a lot of the recent fandom wank about what content should or shouldn’t be allowed on AO3. Specifically: there are a lot of people who want the Archive to ban particular types of fic, but who have no real understanding of how you’d actually implement such a ban in practice.
While there are legitimate arguments to be made about the unwisdom of tumblr’s soon-to-be-forbidden content choices - the whole “female-presenting nipples” thing and the apparent decision to prioritise banning tits over banning Nazis, for instance - the functional problem isn’t that they’ve decided to monitor specific types of content, but that they’ve got no sensible way of enacting their own policies. Quite clearly, you can’t entrust the process to bots: just today, I’ve seen flagged content that runs the gamut from Star Trek: TOS screenshots to paleo fish art to quilts to the entire chronic pain tag to a text post about a gay family member with AIDS - and at the same time, I’ve still been seeing porn gifs on my dash.
It’s absolute chaos, which is what happens when you try to outsource to programs the type of work that can only reliably be done by people - and even then, there’s still going to be bad or dubious or unpopular decisions made, because invariably, some things will need to be judged on a case by case basis, and people don’t always agree on where the needle should fall.
Now: consider that this is happening because tumblr is banning particular types of images. Images, at least, you can kiiiiinda moderate by bots, provided you’re using the bot-process as a filter to cut down on the amount of work done by actual humans, and also provided you’re willing to take a huge credibility hit given the poor initial accuracy of said bots, but: images. Bots can be sorta trained to recognise and sort those, right?
But the kind of AI sophistication you’d need to moderate all the content on a text-based site like AO3? That… yeah. That literally doesn’t exist, and going by tags and keywords wouldn’t help you either, because there’d be no handy way to distinguish what type of usage was present just on that basis alone. Posts about content generated by neural nets are hilarious precisely because our AI isn’t there yet, and based on what we’ve seen so far, we won’t be there for a good long while.
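To make the keyword problem concrete, here’s a minimal sketch (hypothetical code, not any real site’s moderation system) of why matching on words alone can’t moderate text: the filter sees the keyword, not the usage, so propaganda, anti-Nazi posts and historical education all trip it identically.

```python
# A naive keyword filter: flags a post if it contains any blocklisted word.
# It has no way to tell WHY the word appears - which is exactly the problem.

BLOCKLIST = {"nazi"}

def naive_flag(post: str) -> bool:
    """Flag a post if any word, stripped of punctuation, is blocklisted."""
    words = {w.strip(".,!?;:()").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "Join the Nazi cause today",                  # actual propaganda
    "Punch a Nazi, deplatform them all",          # anti-Nazi post
    "Museum exhibit: Nazi propaganda, 1933-45",   # historical education
]

# All three are flagged identically, though only one should be removed.
print([naive_flag(p) for p in posts])  # → [True, True, True]
```

Distinguishing those three usages requires understanding context, which is precisely the capability current moderation tooling lacks at scale.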
It’s a point I’ve made again and again, but I’m going to reiterate it here: it’s always easy to conjure up the most obvious, extreme and clear-cut examples of undesirable content when you’re discussing bans in theory, but in practice, you need to have a feasible means of enacting those rules with some degree of accuracy, speed and accountability that’s attainable within both budget and context, or else the whole thing becomes pointless.
On massive sites like AO3 and tumblr, the considerable expense of monitoring so much user-generated content with paid employees is, to a degree, obviated by the concept of tagging and blocking, the idea being that users can curate and control their own experience to avoid unpleasant material. There still needs to be oversight, of course - at absolute minimum, a code of conduct and a means of reporting those who violate it to a human authority in a position to enforce said code - but the thing is, given how much raw content accrues on social media and at what speed, you really need these policies to be in place, and actively enforced, from the get-go: otherwise, when you finally do start trying to moderate, you’ll have to wade through the entire site’s backlog while also trying to keep abreast of new content.
Facebook, which is a multi-billion dollar corporation, can afford to have paid human moderators in place for assessing content violations instead of relying on bots; however, it is also notoriously terrible at both following its own standards and setting them in the first place. To take an example salient to the tumblr mess, Facebook has an ongoing problem with how it handles breastfeeding posts, while its community standards regarding what counts as hate speech are, uhhh… Not Great. Twitter has similarly struggled with bot accounts proliferating during multiple recent elections and with the seemingly simple task of deplatforming Nazis - not because they can’t, but because they don’t want to take a quote-unquote political stance, even for the sake of cleaning house.
It’s also because, quite frankly, neither Facebook nor Twitter were originally thought of as entities that would one day be ubiquitous and powerful enough to be used to sway elections; and when that capability was first realised by those with enough money and power to take advantage of it, there were no internal safeguards to stop it happening, and not nearly enough external comprehension of or appreciation for the risks among those in positions of authority to impose some in time to make a difference. Because even though time spent scrolling through social media passes like reverse dog years - which is to say, two hours can frequently feel like ten minutes - its impact is such that we fall into the trap of thinking that it’s been around forever, instead of being a really recent phenomenon. Facebook launched in 2004, YouTube in 2005, Twitter in 2006, tumblr in 2007, AO3 in 2009, Instagram in 2010, Snapchat in 2011, Tinder in 2012, Discord in 2015. Even LiveJournal, that precursor blog-and-fandom space, only began in 1999, with the purge of Strikethrough happening in 2007. Long-term, we’re still running a global beta on How To Do Social Media Without Fucking Up, because this whole internet thing is still producing new iterations of old problems that we’ve never had to deal with in this medium before - or if so, then not on this scale, within whatever specific parameters apply to each site, in conjunction with whatever else is happening that’s relevant, with whatever tools or budget we have to hand. It is messy, and I really don’t see that changing anytime soon.
All of which is a way of saying that, while it’s far from impossible to moderate content on social media, you need to have actual humans doing it, a clear reporting process set up, a coherent set of rules, a willingness to enforce those rules consistently - or at least to explain the logic behind any changes or exceptions and then stand by them, too - and the humility to admit that, whatever you planned for your site to be at the outset, success will mean that it invariably grows beyond that mandate in potentially strange and unpredictable ways, which will in turn require active thought and anticipation on your part to successfully deal with.
Which is why, compared to what’s happening on other sites, the objections being raised about AO3 are so goddamn frustrating - because, right from the outset, it has had a clear set of rules: it’s just not one that various naysayers like. Content-wise, the whole idea of the tagging system, as stated in the user agreement, is that you enter at your own risk: you are meant to navigate your own experience using the tools the site has provided - tools it has constantly worked to upgrade as the site traffic has boomed exponentially - and there’s a reporting process in place for people who transgress otherwise. AO3 isn’t perfect - of course it isn’t - but it is coherent, which is exactly what tumblr, in enacting this weird nipple-purge, has failed to be.
Plus and also: the content on AO3 is fictional. As passionate as I am about the impact of stories on reality and vice versa, this is nonetheless a salient distinction to point out when discussing how to manage AO3 versus something like Twitter or tumblr. Different types of content require different types of moderation: the more variety in media formats and subject matter and the higher the level of complex, real-time, user-to-user interaction, the harder it is to manage - and, quite arguably, the more managing it requires in the first place. Whereas tumblr has reblogs, open inboxes and instant messaging, interactions on AO3 are limited to comments and that’s it: users can lock, moderate or throw their own comment threads open as they choose, and that, in turn, cuts down on how much active moderation is necessary.
tl;dr: moderating social media sites is actually a lot harder and more complicated than most people realise, and those lobbying for tighter content control in places like AO3 should look at how broad generalisations about what constitutes a Bad Post are backfiring now before claiming the whole thing is an easy fix.
It’s so incredibly odd to me to throw in “Twitter can’t even do something simple like deplatform clear Nazis!” after the part where you explain why that’s not easy at all.
I mean, can their image algorithms distinguish between:
- Actual Nazi propaganda;
- A Swastika on a piece of Asian art predating the 20th century;
- A Swastika with a slash through it on an antifa account;
- Actual Nazi propaganda, being posted by a holocaust museum or scholar with the aim of educating people on the horrors of Nazism (seriously, their own propaganda discredits them incredibly well)?
Can it do the same for text, which is apparently even more difficult?
After reading this primer, why on Earth would we believe that deplatforming Nazis was any easier than deplatforming Adult content?
You’re conflating two separate points here: firstly, that it is easier for Twitter to deplatform Nazis than it is for tumblr; and secondly, that tumblr is being criticised for prioritising a ban on NSFW content ahead of deplatforming Nazis, regardless of the difficulty.
Twitter, unlike tumblr, isn’t primarily reliant on algorithms to monitor content; it has a large number of actual, physical employees responding to reports of user violations and the proven ability to delete accounts en masse, as per their recent purge of bots. We know that Twitter is capable of deplatforming Nazis and white nationalists with a high degree of accuracy; they’re just really reluctant to do that, because of the unfortunate yet unavoidable fact that there’s a whole bunch of real, blue-check-verified politicians and public figures using their service, many with vocal support bases, who espouse or tacitly condone those beliefs. The problem here is that Twitter still thinks of itself as a ‘neutral’ platform, because it never considered being in a position to spread violent ideologies or political misinformation on this scale - but now that it is, and now that extreme right-wing positions are being increasingly normalised (in part because of its own negligence in letting those ideologies gain traction in the first place) it doesn’t want to be seen to sacrifice that neutrality by drawing what ought to be a basic line in the sand.
By comparison, what’s bothering people about the continued white nationalist/Nazi presence on tumblr during this purge - aside from, you know, the obvious - isn’t because we think the algorithms would somehow magically become more competent if directed towards Nazis, but because the decision to crack down on naked tits first makes it pretty clear that the priorities of those in charge are broken.
So, yes: Twitter and tumblr both face difficulties in deplatforming Nazis, but the types of difficulties in each case are radically different. Twitter’s problems are, in order of priority, the need to acknowledge that the platform itself cannot be and is not politically neutral, the need to establish a coherent set of guidelines for users going forward shaped in acknowledgement of this fact, and - last but absolutely not least - the need to figure out legal standpoints and protective measures for deplatforming politicians, given the inevitable blowback. Tumblr’s problems are a bunch of broken algorithms that are nowhere in the ballpark of working properly, an ongoing conflict between tumblr staff and the ultimate owners at Verizon, and the fact that they are still overrun with Nazis.
Hope that clears things up!
fozmeadows posted this
