

@diz OK, that would have prevented any escape 🙃
L’Etat, c’est moi
@diz OK, that would have prevented any escape 🙃
@HedyL @diz I kinda wonder if this would work better if it just was worded the other way round: “must be supervised always”
If I understand correctly, LLMs have difficulties encoding negative correlations (not, un-, …)
Edit: or maybe not, seeing it did this transformation already in the introduction and still lets the dog escape on the very first turn
@BlueMonday1984 lol @ “I try not to let [performance] considerations get in the way”
Also why do you even put a React Dev on that task 🤡
@IsThisAnAI dozing off at the keyboard however, it seems
deleted by creator
@blarth @TheThrillOfTime huh. You totally name at least one use case then, huh
You only didn’t because it’s so blindingly obvious
(It’s BS)
Also, learn about Luddites, man
@Matriks404 @dgerard got it in one! It’s MS’s marketing campaign for PCs with a certain amount of “AI” FLOPS
@philycheeze @xkbx I bought one anyway. 10 years later, mind you :p
@philycheeze @xkbx yes, I think Microslop’s fumble of selling the HD DVD drive only as an external add-on really hindered the format
@BlueMonday1984 I am dead. From Fremdschämen
@fasterandworse @dgerard I mean, it is absurd. But it is how it works: an LLM is a black box from a programming perspective, and you cannot directly control what it will output.
So you resort to pre-weighting certain keywords in the hope that it will nudge the system far enough in your desired direction.
There is no separation between code (what the provider wants it to do) and data (user inputs to operate on) in this application 🥴
@o7___o7 @bitofhope it is the cum sprite!
“Splash on your tits?”