Pretty Noose
26 January 2023
Tonight I want to talk, briefly, about rule enforcement on social media sites. In what seem like dueling announcements for “Worst Decision of the Year” awards, Twitter this week reinstated (then suspended again) the account of an avowed white supremacist, and Meta, Facebook and Instagram’s parent company, announced that it will be allowing the 45th President of the United States back on its platforms.
I’ve touched briefly on the topic of content moderation here in the past, so tonight I want to share a little nuance about how we approached one aspect of suspensions inside Twitter when I was there. One of the first tasks I had as a content strategist after moving to the Design and Research team was to help our Trust and Safety folks update our enforcement emails, the ones people get explaining why their accounts have been suspended. One important part of the task was finding the right balance between our newly updated brand voice and a stern but understanding tone.
To start, we took the finite list of policy offenses, along with the number and degrees of infractions involved, and began auditing the existing emails, prioritizing the ones we sent most often. As we went through them, we noticed we could streamline what we were saying, and to whom, by creating templates that accounted for severity (like first offenses) and the offense itself (like spamming links to people). Each template had places for variables to be programmed in, such as the name of the policy people had broken and the number of times the account had violated it. For a good deal of the most frequently violated policies, the work was straightforward: revise the existing version with the new brand voice in mind, test it with the policy variations inserted, and move on to the next. But here’s where doing work at Twitter gets interesting.
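To make the idea concrete, here’s a minimal sketch of that kind of variable-driven template in Python. It is an illustration only, not Twitter’s actual tooling; the variable names, policy wording, and URL are all hypothetical placeholders.

```python
from string import Template

# A minimal sketch of one enforcement-email template (assumed structure,
# not Twitter's real system): one template per offense/severity
# combination, with account-specific details filled in at send time.
FIRST_OFFENSE = Template(
    "Hi $username,\n\n"
    "Your account has been suspended for violating our $policy_name policy. "
    "Our records show this is violation number $violation_count for this account.\n\n"
    "You can review the full policy here: $policy_url\n"
)

# Hypothetical values, for illustration only.
message = FIRST_OFFENSE.substitute(
    username="example_user",
    policy_name="platform manipulation and spam",
    violation_count="1",
    policy_url="https://example.com/rules/spam",
)
print(message)
```

The payoff of this approach is that the prose gets written and reviewed once per offense-and-severity combination, while the specifics of any individual suspension are just data dropped into the slots.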
Even as we were undergoing this transformation, our policies were changing. And for any social media platform you want to be a part of, they should always be changing. No service is going to be able to create a set of policies on launch day that will suffice for any meaningful length of time without evolving. Because the way people use those platforms will evolve. In good ways and in bad. The policy enforcement teams, hopefully working in collaboration with a lot of other people both inside and outside the company, need to stay on top of things like trend manipulation, spam, impersonation, and a whole host of other ways of weaponizing product features.
Those product features, or more accurately the people pitching, designing, and developing them, also need to be able to anticipate the ways a new or iterated feature can be used for harm, and either build in mitigations or re-evaluate launching altogether. It becomes the responsibility of each member of the team to look at what they’re building and ask not just “Who is this for?” but “Who could this harm?”
As we step back and think about how we’re building the social web, and what foundational decisions we continue to build upon, I am starting to wonder why we can’t move away from engagement-based metrics and towards something more benevolent. But there’s obviously no shareholder value in curiosity or community or news literacy. Unless and until we can create incentives which reward civility, we are going to keep recreating the scenarios which have ushered in today’s separatism. Especially if Web 2.0 platforms keep allowing the worst of us to drive the conversations for the rest of us.
See you tomorrow?