• 2 Posts
  • 209 Comments
Joined 2 years ago
Cake day: September 27th, 2023

  • And also because Animate Dead, the spell the blurb in the meme is from, reads:

    Choose a pile of bones or a corpse of a Medium or Small humanoid within range. Your spell imbues the target with a foul mimicry of life, raising it as an undead creature. The target becomes a skeleton if you chose bones or a zombie if you chose a corpse (the DM has the creature’s game statistics).


  • Mirodir@discuss.tchncs.de to Technology: It's rude to show AI output to people
    6 days ago

    On the second part: that is only half true. Yes, there are LLMs out there that search the internet, summarize what they find, and reference some of the websites.

    However, it is not rare for them to add their own “info” that isn’t in the cited source at all. If you use the LLM to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not be relied on at all for anything critical, even if it comes with a nice citation.



  • In addition to what other people already said, without looking at the actual percentages, this could also just be random fluctuation.
    “Mostly Positive” is 70–79% and “Mixed” is 40–69%. If a game teeters around the 70% mark, it can easily cross the threshold separating the two through pure chance, in either direction.
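A minimal sketch of that fluctuation argument, with all numbers assumed purely for illustration: a game whose “true” approval rate sits exactly on the 70% boundary, sampled as 500 thumbs-up/down reviews per trial:

```python
import random

random.seed(0)
true_rate = 0.70   # assumed: approval rate exactly on the Mixed / Mostly Positive boundary
n_reviews = 500    # assumed review count, just for illustration

def observed_label(p, n):
    """Sample n thumbs-up/down reviews and return the label the observed score would get."""
    positive = sum(random.random() < p for _ in range(n))
    return "Mostly Positive" if positive / n >= 0.70 else "Mixed"

labels = [observed_label(true_rate, n_reviews) for _ in range(1000)]
print(labels.count("Mixed"), labels.count("Mostly Positive"))
# Roughly a 50/50 split: pure sampling noise flips the label either way.
```

The same game, with the same underlying quality, ends up labeled “Mixed” in about half the simulated runs and “Mostly Positive” in the other half.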


  • Mirodir@discuss.tchncs.de to memes: Totally
    12 days ago

    Exactly, only twice as common. To put it in other words: for every two times someone says “free as a bird”, one person says “happy as a clam”.

    That is a much narrower gap than the one between something commonly said and something rarely said.



  • What’s also kinda wild is that those plans often carry a 0% interest rate as long as you’re able to pay the installments on time. Which means in theory you MAKE money by using them, because you can earn interest on that money in the meantime.

    It ALSO means they know the people using those services are so bad with money that the business can sustain itself (and turn a nice profit) purely on clients failing to pay on time, after which the debt is sold to debt collectors. It’s absolutely disgusting how predatory this is: they make their money mostly off the people who’d need such a system the most (and, to a smaller extent, off people who simply don’t care).








  • (because it was trained on real people who write with those quirks)

    Yes and no. Generally speaking, ML models pull toward the average and away from the extremes, while most people have odd quirks in how they write. (For example, my overuse of parentheses, too many commas where periods belong, and probably a few other things I’m unaware of.)

    To give a completely different example: if you average the facial features of a large group of humans (the size, position, orientation, etc. of everything), you get a conventionally very attractive face. Yet very, very few people actually come close to that ideal, because the average person, meaning a random person, has a few features that stray far from it. With that sheer number of features, there’s a high chance at least some end up out of bounds.
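That averaging effect can be sketched numerically. In this hypothetical toy model (all parameters assumed), each “person” is a vector of 50 standardized features: the group average stays near the center on every feature, while a random individual almost always has at least one feature far out:

```python
import random
import statistics

random.seed(1)
n_people, n_features = 1000, 50

# Each "person" is a vector of 50 standardized features (size, position,
# orientation, etc.), drawn here from a standard normal distribution.
people = [[random.gauss(0, 1) for _ in range(n_features)]
          for _ in range(n_people)]

# The averaged "face": every feature lands close to the center.
avg_face = [statistics.mean(p[i] for p in people) for i in range(n_features)]
print(max(abs(x) for x in avg_face))    # tiny: far below 1 standard deviation

# A random individual: usually has at least one feature beyond 2 std devs.
individual = people[0]
print(max(abs(x) for x in individual))
```

So the average is “ideal” on every axis at once, which almost no single member of the group is.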

    An ML model is generally punished during training for producing anything that contains such extremes, so the very human trait of being eccentric in any regard is trained away. If you’ve ever watched people generate anime waifus with modern generative models, you know exactly what I mean. Some methods can be, and are being, deployed to try to keep or bring back those eccentricities, at least when asked for.

    On top of that, modern LLM chatbots go through a reinforcement learning phase, where they learn to write in ways readers will enjoy; that is no longer copying but “inventing” in a more trial-and-error style. Think of the YouTube videos you’ve seen of “AI learns to play X game”, where the model learns without any footage of someone actually playing. I’m assuming that’s where the overuse of the em-dash and the quippy one-liners come from. They were probably liked either by the human raters or by the automated judges trained on the human feedback used in that process.





  • Different person here.

    For me the big disqualifying factor is that LLMs don’t have any mutable state.

    We humans have a part of our brain that can change our state in reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren’t. Some can be made consciously, some can be influenced consciously, and some are entirely subconscious. The same is true of most animals we have observed: we can change their states through various means. In my opinion, this is a prerequisite for feeling anything.

    Once we use models with bits dedicated to such functionality, it’ll become a lot harder for me personally to argue against them having “feelings”, especially because in my worldview, continuity is not a prerequisite, and instead mostly an illusion.


  • I’m not them, but for me “social media” in the colloquial sense has some sort of discoverability, plus some functionality to put a piece of media out publicly in a way that it can then be discovered. (Note that this isn’t my entire definition, just the part by which I feel email is disqualified.)

    With email you need external services to find, subscribe to, and/or manage things such as mailing lists to even sorta approach this behavior.