• 1 Post
  • 1.86K Comments
Joined 2 years ago
Cake day: April 6, 2024






  • I want to take this efficiency argument in a different direction. Start with two key observations: first, the system doing the simulating will never be as efficient as the system being modeled; second, a conscious system is aware of its own efficiency. It follows that even if you simulate an entire human body to create consciousness, the result will not have the same quality. It will either be aware of all the extra resources required to create its “self,” or it will be fed a simulation of self that hides its own nature and thus cannot be self-aware.








  • Let’s think that through. For this to work, we want the bot to respond only to toxic AI slop, not to authentic humans trying to engage with other humans. If you had an accurate AI-slop detector, you could integrate it into existing moderation workflows instead of having a bot fake a response to such mendacity. Edit: But there could be value in siloing such accounts and feeding them poisoned training data… That could be a fun mod tool.