Admin of lemmy.blahaj.zone

I can also be found on the microblog fediverse at @[email protected] or on matrix at @ada:chat.blahaj.zone

  • 349 Posts
  • 3.78K Comments
Joined 3 years ago
Cake day: January 2nd, 2023

  • The thing is, though, no one actually changes their opinion because Musk spouts contradictory bullshit. His fellow transphobes just see him pushing transphobia, and don’t care what he’s actually saying. For them as well, the point is the hate. The words are just as meaningless, whether it’s their words or someone else’s. The only thing that matters is that it pushes hate.

    For people that are already trans allies, his words don’t change anything.

    And for people who are on the fence, who don’t know how to feel about trans folk and their rights, Musk’s words won’t change anything either. The thing that turns fence sitters into supporters is seeing the human face of trans people. It’s the experience of trans people shifting from a hypothetical talking point into real people who are impacted by the hate. And again, that has very little to do with the words people use.


  • This is just regular moderation, though.

    It’s using the existing tools, but making a small portion of them (approving applications) available to a much larger pool of people.

    It doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

    If the instance that hosts it doesn’t think it’s a bot, then it stays, but is blocked by the instance that does think it’s a bot.

    And if the instance that thinks it’s a bot also hosts it, it gets shut down.

    That is regular fediverse moderation.


  • Yeah, but that’s after the fact, and after their content has federated to other instances.

    It doesn’t solve the bot problem; it just plays whack-a-mole with them, whilst creating an ever larger amount of moderation work, because their content federates to multiple instances.

    Solving the bot problem means stopping the content from federating, which either means stopping the bot accounts from registering, or stopping them from federating until they’re known to be legit.


  • I mean, for approving users, you just let your regular established users approve instance applications. All they need to do is stop the egregious bots from getting through. And if there are enough of them, the applications will be processed really quickly. If there is any doubt about an application, let them through, because they can be caught afterwards. And historical applications are already visible and easily checked if someone has a complaint.

    And if you don’t like the idea of trusted users being able to moderate new accounts, you can tinker with that idea. Let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them (see the sketch after this comment). It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do by containing them to a single instance.

    My point is, there are options that could be implemented. The status quo of open sign-ups, with a growing number of bots, doesn’t have to be the unquestioned approach going forward.
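
To illustrate the containment idea above, here is a minimal sketch of gating outbound federation on an approval flag. The names (ApprovalState, Account, should_federate) are hypothetical, not Lemmy’s actual types; it only shows the shape of the check.

```rust
// Minimal sketch: hold outbound federation until an account is approved.
// ApprovalState, Account and should_federate are illustrative names,
// not Lemmy's actual code.

#[derive(Clone, Copy, PartialEq)]
enum ApprovalState {
    Pending,  // can post and interact locally, nothing federates outward
    Approved, // posts federate to other instances as normal
}

struct Account {
    name: String,
    state: ApprovalState,
}

/// Decide whether a newly created post should be sent to other instances.
fn should_federate(author: &Account) -> bool {
    author.state == ApprovalState::Approved
}

fn main() {
    let pending = Account {
        name: "brand_new_user".to_string(),
        state: ApprovalState::Pending,
    };
    let approved = Account {
        name: "established_user".to_string(),
        state: ApprovalState::Approved,
    };

    // Pending accounts can still post and get local replies,
    // but their content stays contained on the home instance.
    println!("{} federates: {}", pending.name, should_federate(&pending));
    println!("{} federates: {}", approved.name, should_federate(&approved));
}
```

The idea is that a pending account keeps full local functionality; the only thing withheld is outbound federation, so a bot that slips through never lands in other instances’ moderation queues.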



  • Make sign-ups require approval and create a “trusted user” permission level that lets the regular trusted users on the instance see and process pending sign-up requests and suspend/delete brand-new spam accounts (say, under 24 hours old) that slip through the cracks. You can have dozens of people across all timezones capable of approving requests as they are made, and capable of shutting down the bots that slip through. (A rough sketch of what that permission level could look like is below.)

    Boom, bot problem solved
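
As a rough illustration, here is a minimal sketch of what that permission level could look like. The names (Role, can_approve_applications, can_suspend) are hypothetical, not Lemmy’s real schema; it assumes trusted users can approve pending applications and suspend only accounts under 24 hours old, with anything older still requiring an admin.

```rust
// Minimal sketch: a "trusted user" permission level that can approve
// pending sign-up applications and suspend very new accounts.
// Role, can_approve_applications and can_suspend are illustrative names.

use std::time::{Duration, SystemTime};

enum Role {
    Member,
    TrustedUser, // regular established user given limited moderation rights
    Admin,
}

struct User {
    role: Role,
}

struct Account {
    created: SystemTime,
}

/// Trusted users and admins can process pending sign-up applications.
fn can_approve_applications(user: &User) -> bool {
    matches!(user.role, Role::TrustedUser | Role::Admin)
}

/// Trusted users may only suspend accounts under 24 hours old;
/// anything older still needs an admin.
fn can_suspend(user: &User, target: &Account) -> bool {
    match user.role {
        Role::Admin => true,
        Role::TrustedUser => target
            .created
            .elapsed()
            .map(|age| age < Duration::from_secs(24 * 60 * 60))
            .unwrap_or(false),
        Role::Member => false,
    }
}

fn main() {
    let trusted = User { role: Role::TrustedUser };
    let fresh_spam_account = Account { created: SystemTime::now() };

    println!("can approve applications: {}", can_approve_applications(&trusted));
    println!("can suspend new account: {}", can_suspend(&trusted, &fresh_spam_account));
}
```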