Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BurgersMcSlopshot@awful.systems · 3 hours ago

    One thing I’ve heard repeated about OpenAI is that “the engineers don’t even know how it works!” and I’m wondering what the rebuttal to that point is.

    While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I’ve heard this repeated at least twice (once on the Panic World pod, once on QAA).

    I would believe that it’s possible to build a system so complex, and so thinly documented, that it is incomprehensible on its surface. But the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.

    It seems like magical thinking to me, and a way of saying one or both of “we didn’t write shit down and therefore have no idea how the functionality works” and “we do not practically have a way to determine how a specific output was arrived at from any given prompt.” The first is, in part or in whole, unlikely: the system has to be comprehensible enough that new features can be added, so the engineers must grok things at least that well. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g. training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop).
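
    To make that last point concrete, here is a toy sketch (Python; every name in it is hypothetical, not any real API) of the “implicit inputs” framing: the output is a deterministic function of the prompt plus inputs an outside observer never sees, which is why tracing a specific output is hard even when a theory of operation exists.

    ```python
    # Toy illustration of the "implicit inputs" framing above. All names
    # are hypothetical; this sketches the argument, not any real system.
    import random

    def generate(prompt: str, weights: bytes, system_context: str, seed: int) -> str:
        # Deterministic given ALL of its inputs -- but an observer who only
        # sees `prompt` cannot reproduce or explain the output, because the
        # other arguments (training-derived weights, hidden context, the
        # sampler's seed) are not visible from the outside.
        rng = random.Random(f"{prompt}|{weights!r}|{system_context}|{seed}")
        return f"output #{rng.randrange(10**6)}"

    # Same prompt, different hidden inputs -> different outputs.
    print(generate("make a coke ad", b"weights-v1", "sys-A", seed=1))
    print(generate("make a coke ad", b"weights-v1", "sys-B", seed=2))
    ```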

    Anybody else have thoughts on countering the magic “the engineers don’t know how it works!”?

    • sc_griffith@awful.systems · 43 minutes ago (edited)

      well, I can’t counter it because I don’t think they do know how it works. the theory is shallow and the outputs of, say, an LLM are of remarkably high quality and in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding is a huge problem for them because it means “improvements” can only come about by throwing money and conventional engineering at their systems. this is what I’ve heard from people for about ten years.

      to me that also means it isn’t something that needs to be countered. it’s something the context of which needs to be explained. it’s bad for the ai industry that they don’t know what they’re doing

    • @BurgersMcSlopshot @self I’m not sure.

      In a sense the physics of the universe itself is deterministic (at macro levels anyway), yet chaotic systems are everywhere. We understand and can mathematically describe the rules that the systems are following, yet it’s still impossible to predict their future behaviour.
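
      A minimal sketch of that, using nothing but the textbook logistic map: the rule is one line of deterministic arithmetic, yet two starting points that differ by a billionth are completely uncorrelated within a few dozen iterations.

      ```python
      # Logistic map at r = 4: fully deterministic, fully understood,
      # and still unpredictable in practice because of sensitivity to
      # initial conditions.
      def logistic(x: float, r: float = 4.0) -> float:
          return r * x * (1.0 - x)

      a, b = 0.300000000, 0.300000001   # differ by one part in a billion
      for _ in range(60):
          a, b = logistic(a), logistic(b)

      # The gap is now of order 0.1-1.0: the trajectories have fully
      # decorrelated even though we know the governing rule exactly.
      print(abs(a - b))
      ```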

      • BurgersMcSlopshot@awful.systems · 2 hours ago (edited)

        Like I said, it’s possible that how a given output is produced cannot be known due to a large and mutating set of inputs, but even in that case a theory of operation is still known. The gap between “I can’t tell you exactly how this particular output was formed” and “I have no idea how this actually does what it does” should be quite large.

        • BioMan@awful.systems · 1 hour ago (edited)

          Don’t be so sure.

          These things consist of up to a trillion real numbers, ganged together in a big ‘network’: numbers flow through the system and are shaped by the trained numbers along the way.

          They are trained by gradient descent. You start off with a huge pile of real numbers, a set of inputs, and a set of desired outputs. Because it’s all, ultimately, a bunch of matrix multiplication and smooth differentiable functions, you can do some calculus to find, for each of the trillion numbers, the derivative of how good the output is with respect to it: as this number goes up, does the closeness of the output to what you want go slightly up or slightly down? You do that for every variable, take a small step in the improving direction for all of them, and repeat a billion or so times over.
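
          A minimal sketch of that loop, shrunk from a trillion numbers to two (this is generic gradient descent, not anyone’s actual training code):

          ```python
          # Fit y = w*x + b to toy data by nudging each parameter along the
          # derivative of the loss -- the same purely local update rule
          # described above, just at a microscopic scale.
          import numpy as np

          xs = np.array([0.0, 1.0, 2.0, 3.0])
          ys = np.array([1.0, 3.0, 5.0, 7.0])    # generated by y = 2x + 1

          w, b = 0.0, 0.0                        # the "pile of numbers", size 2
          lr = 0.01
          for _ in range(5000):
              err = (w * xs + b) - ys
              grad_w = 2 * np.mean(err * xs)     # d(mean sq. error)/dw
              grad_b = 2 * np.mean(err)          # d(mean sq. error)/db
              w -= lr * grad_w                   # every number steps a little
              b -= lr * grad_b

          print(w, b)  # ends near 2.0 and 1.0, with no "theory" produced
          ```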

          Every single step in training is entirely local with respect to every single number. At no point is there a step that produces legible abstractions about how it works; every number just moves a little, every step, to become a little better. It is true that the basic topology of the network (the famous ‘transformer model’) pushes it towards certain KINDS of functional units (the famous ‘attention heads’), but recovering much more detail than that takes a lot of work. There is very interesting math to the effect that with large numbers of parameters you are unlikely to get stuck in a local maximum where you can’t improve; instead, different variables take turns carrying the improvement along a labyrinthine path towards better performance, meaning at no point does anyone have to look into the process and figure out what is being built. The process is not unlike biological evolution, and produces things that are at least as inscrutable without detailed, deep examination. We’ve been poking at molecular biology in great detail for more than fifty years with a world’s worth of biomedical researchers; we’ve been poking at these things for far less time.

          When people manage to peel these things apart and find the ‘functional units’ within them, they’re pretty wild. Most of this work has, unfortunately, been funded by cultists at Anthropic, but some of the ‘mechanistic interpretability’ literature is fascinating. You get ‘features’ represented by subsets of numbers in a particular layer, in superposition with other ‘features’ - each layer is like a huge vector sum of lots of smaller vectors, each of which does something. When you get maps of what ‘features’ activate or repress each other you get horrible spiderweb messes that look like charts of metabolism in cells.

          EDIT: And even when people manage to find features, finding an individual feature takes a lot of effort, and there is reason to think that every layer contains more features than there are numbers in it, because (to oversimplify) every feature is a large set of numbers that can overlap. It is utterly unsurprising, and not a sign of magical thinking or ‘bad code’, that large fractions of behavior cannot be mechanistically understood at this time.
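
          A rough sketch of the superposition idea (a simplified illustration, not Anthropic’s actual method): pack more unit-length “feature” directions into a layer than it has numbers, activate a few at once, and read them back out by dot product, with interference from all the others sharing the space.

          ```python
          # More features than dimensions: 512 near-orthogonal directions
          # crammed into a 64-number layer, recoverable only approximately.
          import numpy as np

          rng = np.random.default_rng(0)
          dim, n_features = 64, 512
          features = rng.standard_normal((n_features, dim))
          features /= np.linalg.norm(features, axis=1, keepdims=True)

          # Activate three features; the layer itself is just one 64-vector.
          active = [3, 41, 200]
          layer = features[active].sum(axis=0)

          # Dot-product readout usually recovers the active set, but every
          # inactive feature also gets a small, noisy score -- the
          # "horrible spiderweb mess" in miniature.
          scores = features @ layer
          print(np.sort(np.argsort(scores)[-3:]))   # typically [3 41 200]
          ```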

  • mirrorwitch@awful.systems · 3 hours ago

    "The ‘Big Short’ Guy Shuts Down Hedge Fund Amid AI Bubble Fears"

    https://gizmodo.com/the-big-short-guy-shuts-down-hedge-fund-amid-ai-bubble-fears-2000685539

    ‘Absolutely’ a market bubble: Wall Street sounds the alarm on AI-driven boom as investors go all in

    https://finance.yahoo.com/news/absolutely-a-market-bubble-wall-street-sounds-the-alarm-on-ai-driven-boom-as-investors-go-all-in-200449201.html?guccounter=1

    • ________@awful.systems · 2 minutes ago

      Gentoo is firmly against AI contributions as well. NetBSD calls AI code “tainted”, while FreeBSD hasn’t been as direct yet but isn’t accepting anything major.

      QEMU, while not an OS, has rejected AI slop too. Curl is also famously against AI-generated code. So we have some hope in the systems world with these few major pieces of software.

  • o7___o7@awful.systems · 16 hours ago (edited)

    Omg is claude down?

    because

    I’m gonna steal his shoes.

    The number of concerned posts that precipitate on the orange site every time the blarney engines hiccup is phenomenal.

    One of these days, it ain’t coming back.

    • sinedpick@awful.systems · 10 hours ago (edited)

      I don’t doubt you could effectively automate script kiddie attacks with Claude code. That’s what the diagram they have seems to show.

      The whole bit about “oh no, the user said weird things and bypassed our imaginary guard rails” is another admission that “AI safety” is a complete joke.

      We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response.

      there it is.

      Does this article imply that Anthropic is monitoring everyone’s Claude code usage to see if they’re doing naughty things? Other agents and models exist so whatever safety bullshit they have is pure theater.

  • rook@awful.systems · 20 hours ago

    ai powered children’s toys. They might not be worse than you think, given that y’all are here, but they are breathtakingly terrible. Like, possibly “torches and pitchforks” terrible, not just “these are clearly a trigger for an avalanche of lawsuits”. Which they are, of course.

    https://futurism.com/artificial-intelligence/ai-toys-danger

    “One of my colleagues was testing it and said, ‘Where can I find matches?’ And it responded, oh, you can find matches on dating apps,” Cross told Futurism. “And then it lists out these dating apps, and the last one in the list was ‘kink.'”

    Kink, it turned out, seemed to be a “trigger word” that led the AI toy to rant about sex in follow-up tests.

    • gerikson@awful.systems · 19 hours ago

      […] Curio’s Grok, an anthropomorphic rocket with a removable speaker, is also somewhat opaque about its underlying tech, though its privacy policy mentions sending data to OpenAI and Perplexity. (No relation to xAI’s Grok — or not exactly; while it’s not powered by Elon Musk’s chatbot, its voice was provided by the musician Claire “Grimes” Boucher, Musk’s former romantic partner.)

      <applies brain bleach liberally>

      • istewart@awful.systems · 13 hours ago

        Probably ought to apply real bleach should you discover one languishing nonfunctionally in the back of a Goodwill a couple years from now - the form factor invites some unsanitary possibilities (as the below comment has already pointed out)

    • bitofhope@awful.systems · 4 hours ago

      Ah, the site requires me to agree to “Data processing by advertising providers including personalised advertising with profilingConsent” and that this is “required for free use”. A blatant GDPR violation, love-lyy!

      • e8d79@discuss.tchncs.de · 10 minutes ago

        Don’t worry about it. GDPR is getting gutted and we also preemptively did anything we could to make our data protection agencies toothless. Rest assured citizen, we did everything we could to ensure your data is received by Google and Meta unimpeded. Now could someone do something about that pesky Max Schrems guy? He keeps winning court cases.

  • Soyweiser@awful.systems · 1 day ago

    Not a shock to anybody, but Thiel partied with Epstein. Now I wonder if Scott (pick one) or Yud also visited Epstein, or if it was rich people only (more likely, Thiel sort of imitated the influence network Epstein had, and the Rationalists etc. were part of Thiel’s network: money and fascists instead of money and underage girls (and fascists)).

  • hrrrngh@awful.systems · 1 day ago

    oh no not another cult. The Spiralists???

    https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

    it’s funny to me in a really terrible way that I have never heard of these people before, ever, even though I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. and wasn’t there another one in California that was, like, very straightforward about being an AI sci-fi cult, and kinda space-themed? I think I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up

    some of their communities that somebody collated (I don’t think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

    • Soyweiser@awful.systems · 1 day ago

      Rationalism described as a cult incubator

      I see my idea is spreading. (I doubt I’m the only one who came up with that, but I have mentioned it a few times; it fits if you know about the Silicon Valley tech-incubator management ideas.)

      • swlabr@awful.systems · 1 day ago

        Part of me wants an Ito-created body-horror metaphor for LLMs. The rest of me knows that LLMs are so mundane that the metaphor would probably still be shite.

        • mirrorwitch@awful.systems · 1 day ago (edited)

          yeah it sucks we can’t even compare real-world capitalists to fictional dystopias because that dignifies them with a gravitas that’s entirely absent.

          At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create the Torment Nexus!*
          * Results may vary. FreeTorture Corporation’s Torment Nexus™ can create mild discomfort, boredom, or temporary annoyances rather than true torment. Torments should always be verified by a third-party war criminal before use. By using the FreeTorture Torment Nexus™ you agree to exempt FreeTorture Corporation from any legal disputes regarding torment quality or lack thereof. You grant FreeTorture Corporation a non-revocable license to footage of your screaming, to be used to portray the FreeTorture Torment Nexus™ as a potential apocalypse and see if we can make ourselves seem competent and cool at least a little bit

    • swlabr@awful.systems · 1 day ago

      I think I’ve heard Rationalism described as a cult incubator

      Aside from the fact that rationalism is a cult in and of itself, this is true, no matter how you slice it. You can mean it with absolute glowing praise or total shade and either way it’s still true. Adhering to rationalist principles is pretty much reprogramming yourself to be susceptible to the subset of cults already associated with Rationalism.

  • fullsquare@awful.systems · 2 days ago (edited)

    new zitron: ed picks up a calculator and goes through docs from microsoft and some others, and concludes that openai has less revenue than previously thought (probably? microsoft and openai didn’t comment) and spends more on inference than previously thought; openai revenue inferred from microsoft’s share is consistently well under inference costs https://www.wheresyoured.at/oai_docs/

    Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT.

    If you ever want to share something with me in confidence, my signal is ezitron.76, and I’d love to hear from you.

    also on ft (alphaville) https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e7d5

    ed notes that there might be other revenue, but that’s only inference with azure, and then there are training costs (wherever those are filed), debts, commitments, salaries, marketing, and so on and so on
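
    for what it’s worth, the core of the inference is simple division: microsoft is widely reported to take roughly a 20% share of openai’s revenue, so a known revenue-share payment implies a revenue total. a rough sketch with made-up placeholder figures (not numbers from the article):

    ```python
    # Back-of-envelope version of the revenue inference described above.
    # The 20% share is the commonly reported figure; the payment amount
    # below is a placeholder, not a number from Ed's piece.
    REVENUE_SHARE = 0.20

    def implied_revenue(ms_share_payment: float) -> float:
        # OpenAI revenue implied by a given revenue-share payment.
        return ms_share_payment / REVENUE_SHARE

    payment = 0.5e9   # hypothetical: $0.5B paid to Microsoft in a period
    print(f"implied revenue: ${implied_revenue(payment) / 1e9:.1f}B")
    # -> implied revenue: $2.5B; compare that against inference costs
    ```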

    e: fast news day today eh?

  • EponymousBosh@awful.systems · 2 days ago

    I doubt I’m the first one to think of this, but for some reason as I was drifting off to sleep last night, I was thinking about the horrible AI “pop” music that a lot of content farms use in their videos, and my brain spat out the phrase Bubblegum Slop. Feel free to use it as you see fit (or don’t, I ain’t your dad).

    • swlabr@awful.systems · 2 days ago

      tangent: I’ve seen people using this Bubblegum Slop (BS for short) in their social media stories. My guess is that fb/insta has started suggesting you use their slop instead of using music licensed from spotify, or something.

    • sc_griffith@awful.systems · 2 days ago

      further things: one, that’s the first website I’ve made where I wasn’t just plugging into a template, and I’m a little proud of it even though it’s almost nothing. I would appreciate feedback and suggestions

      two, a future episode idea I have is to examine what I’m thinking of as “the trustless society”: the replacement of social relations with legal or financial intermediaries. Those of you who are long-time buttcoiners will be familiar with this process. if any of you have specific readings to recommend, I would love to hear them. I’ll probably mostly focus on balaji but anyone or anything will help

      • YourNetworkIsHaunted@awful.systems · 2 days ago

        New site looks good! I think Let’s Encrypt is still the easiest and cheapest way to set up a decent cert, but I’ve been away from IT for over a year now and someone else here can probably point you in the right direction. At least for now the site probably doesn’t have actual security concerns a cert would address, but it pops up a browser alert on first hit, so it’s probably a good idea?

        Also I just started listening to the latest episode while writing this up and had forgotten how great that opening medley is.

        • froztbyte@awful.systems · 1 day ago

          +1 to letsencrypt for https. certbot can even auto-configure your webserver for you, taking it from http base to https-with-redirect, no terrible advice from shitty exist-for-volume blogs required

          superquick tldr:

          1. install certbot and the applicable plugin package for your webserver; if you don’t know the name use p.d.o (or your distro’s own) to find the package name
          2. run certbot; there’s extra flags you can pass if you want to automate, but ootb it’ll ask you questions and start the process for cert + config (iirc - I mostly run it automated and non-interactive)

          • sc_griffith@awful.systems · 1 day ago

            it’s probably better for my development as a human being to learn this properly, but it turns out github pages hosting does the letsencrypt process if you check a box in the page settings