It was the week before the week before Christmas and my social media apprentice was buzzing: "We got followed by [redacted]!!!!" in the same proud tones used when a healthy-eating tweet gets retweeted by someone off Masterchef. The name (unfamiliar to me) was rapidly annotated with a brief bio; something BBC-related, youth-orientated, popular. Good for us; I duly congratulated her and gave her the go-ahead to follow back. I checked out the celeb later that day. Her stream was healthy, wholesome and positive. She looked good. All good, carry on, carry on.
Two days later, the first of them arrived. Love your tweet! You have a new follower! The apprentice was on leave, settling into her new-new-build. So I let the first few go by. When I came back after lunch, there were 17 of them. And many of the profile pictures looked disturbingly... similarly... undressed.
Cleaning the Stream

We need to keep the stream clean as (in common with many professional users of Twitter) we are catering to the 13+ age-group. Kids and parents, professionals and teachers. Family friendly is the order of the day. It's pretty normal (if a bit annoying) to get a daily spatter of speculative marketing profile engagements - t-shirt sellers, lifestyle coaches, SEO-jockeys and the like. It's part of the Twitterverse.
This, though, this was something new, at least to me. As the song goes, new, and a bit alarming.
These accounts were not in the slightest bit family friendly. Each one was an identikit assemblage of quease-inducing porn clip-art, unsubtle 100-character come-ons, and links signalled clearly (using the same six or so unmistakable and nasty euphemisms) as leading to live-stream, hard-core pornography. They were all using the same phrases and the same images, and I had little doubt that their carefully scrambled link addresses (See LINK in BIO!!!!!??!!) all led to the same set of porn websites.
Weirdly, they all also seemed to be following a set of rules about what they were saying and showing. I was instantly reminded of the ridiculous things people used to do to "get around" the Obscene Publications Act, missing the point that obscenity was intrinsic to what they were selling.
Sisyphus on the block and ban

Every new like or follow now has to be checked. The process, once I stopped clicking around like an idiot looking for the right report route, smoothed down to six quick clicks: check > options > report > categorise > subcategorise > block. Then I have to click again to get off the page.
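In principle, the clicking could be delegated to a script. Here is a minimal sketch, assuming the tweepy library against Twitter's v1.1 REST API; the credentials are placeholders, and SPAM_PHRASES is a stand-in list of mine, not the bots' actual euphemisms:

```python
# Sketch: report-and-block any recent follower whose bio uses the recycled
# spam phrases. Assumes tweepy 3.x; all keys and phrases are placeholders.
import tweepy

SPAM_PHRASES = ["link in bio", "live stream"]  # stand-ins for the real tells

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Walk the most recent followers and file a spam report, blocking as we go.
for follower in tweepy.Cursor(api.followers).items(200):
    bio = (follower.description or "").lower()
    if any(phrase in bio for phrase in SPAM_PHRASES):
        # users/report_spam files the report and blocks in a single call
        api.report_spam(user_id=follower.id, perform_block=True)
        print("reported and blocked:", follower.screen_name)
```

The catch is false positives: a script like this would happily block a real parent or teacher whose bio happened to mention the wrong phrase, which is one reason the six clicks stay manual.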
By five in the afternoon, after a day spent mostly blocking and reporting identikit profiles, the flow seemed to be dying down. I assumed that, the profiles being pretty obviously generated by an algorithm, it had stripped my accounts off its follow list and moved on to less active targets.
I also had my first set of progress reports back from Twitter - and in case anyone is in any doubt about this, selling pornography on Twitter breaks its Ts & Cs. Every account I reported was closed down promptly.
As will surprise absolutely nobody who has ever been in this situation, the following morning the accounts were back again, and since then, despite being closed down and down and down by Twitter, they have returned again and again. Arriving at a less panic-inducing two to four a day per profile, they are now just another editor job. Retweet, post, favourite, post again, and block and report the prnbots.
After every block and ban, there is a small notification from Twitter: thank you for making Twitter safer for everyone.
Drowning in a sea of slime

I'm fond of Twitter, possibly for fuzzy historical reasons that have no place in our current, chillier world. So my first impulse is to worry about Twitter, particularly since my googling and talking to people made it clear that I'm hardly an isolated case. The bulk hacking of inactive accounts and the malicious generation of profiles have been going on since at least 2015. That means Twitter has had a while to come up with a coordinated response, akin to the algorithmic system Facebook uses to delete spam posts as they happen. This hasn't happened, and I can only think of a few reasons why, and none of them are good for Twitter.
Possibility one: something about Twitter's data architecture makes it impossible to dynamically identify accounts or tweets mass-generated from a short list of clearly malicious phrases and images. Not good news for Twitter - does it really want to be relying on human reports? Most users never bother to report anything. (A sketch of just how small that filtering job would be follows the three possibilities.)
Possibility two: the account generation is coming in at such a scale, and at such a level of technical complexity and adaptivity, that it is flooding Twitter's defences, and commercial users (and their customers) are seeing a little of the overflow of a much, much bigger problem. Again, not a good situation, given that Twitter has too little UI to just turn bits of itself off (as happened to Flickr's notes function, for example) until a suitable resolution is established.
Possibility three: Twitter is tolerating, allowing or even accepting these bots, the ones selling adult content included. It's maybe an indicator of how much of an image problem Twitter has that most people I spoke to assumed it simply didn't care, or treated these accounts as a form of "free speech". I don't think that, but I think it's possible that Twitter might see them as an environmental emanation of the medium, like flyposters on a hoarding or postcards in a phone booth.
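Back to possibility one for a moment, because the filtering job really is small. What follows is a toy sketch of mine, not anything Twitter runs: the phrase list and threshold are stand-ins, and real moderation would need far more signals, but it shows how little code a first pass over a short phrase list takes.

```python
# Toy heuristic, not Twitter's code: flag a profile that leans on a short,
# known list of recycled spam phrases. Phrases and threshold are stand-ins.
SPAM_PHRASES = ("see link in bio", "live stream", "hard-core")

def is_probable_spambot(profile_text: str, min_hits: int = 2) -> bool:
    """Flag profiles that reuse at least min_hits of the known phrases."""
    text = profile_text.lower()
    hits = sum(phrase in text for phrase in SPAM_PHRASES)
    return hits >= min_hits

# Identikit profiles trip the filter; an ordinary bio sails through.
assert is_probable_spambot("LIVE STREAM me now!! See LINK in BIO!!")
assert not is_probable_spambot("Teacher, parent, healthy-eating evangelist")
```

Anything along those lines, run at account-creation time against the six or so euphemisms these bots recycle, would catch the identikit profiles I was blocking by hand.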
I can't really see it that way, though. So: report, report, report.