Hunted Down
13 December 2022
Tonight, I want to point y’all to a podcast episode from The Verge. First, there’s a long, detailed conversation between “Decoder” host Nilay Patel and the CEO of Automattic, Matt Mullenweg. After their discussion, Patel talks with The Verge’s deputy editor, Alex Heath, about what a few possible futures for Twitter might look like.
Although you’d do yourself a favor by listening to the entire thing, I do want to highlight part of the discussion between Patel and Mullenweg: specifically, the idea that there is a difference between creating policies and enforcing them. Now, there’s a lot of hokum going on right now around what the current Twitter owner is calling “the Twitter Files,” but what it reinforces — at least for me — is that policies are created by humans. That process can be messy. And enforcing those policies can be even messier.
One of the things I remember vividly from my time at Twitter is the careful, deliberate way we created and revised the policies people agreed to in order to use the service. Interwoven with that memory is how nimble those policies needed to be, because each day brought a new reality: both what we were seeing on the platform and how people were trying to evade the rules. So, as we worked to keep the policies current, we also had to staff up and train people to enforce them. And that part took time.
If you know even a little bit about social media moderation or community management — to paraphrase part of Patel’s interview with Mullenweg — you know one thing for sure: these are not technology problems, they’re people problems. No matter how good your training data or machine learning models are, you have to account both for the bias of the people who built those systems and for the ever-evolving behavior of the people you’re applying them to. There will never be a “set it and forget it” software solution to content moderation, and the sooner every online community realizes that, the more substantive its moderation solutions will become.
See you tomorrow?