LLMs are effectively optimized for bamboozling CEOs into mistaking their output for intelligent activity rather than autocomplete on steroids. And so corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

  • Ben Matthews@sopuli.xyz · 5 days ago
    Some of the comments below the article are good. I especially like this one:

    One thing that seems to be shaking out of LLMs: if an LLM can do a task well enough to pass muster, that task probably didn't need doing in the first place. Coming at it from a programming perspective: if an LLM can generate the required code, it shows that you could have better abstractions that describe the same behavior with less code. If an LLM can generate a document, it's a sign the document wasn't necessary in the first place: it's leftover make-work that should be subsumed into a better process. And so on.
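
    To make the quoted comment's programming point concrete, here is a minimal sketch; the validation scenario, field names, and function names are invented purely for illustration, not taken from the article. The first half is the kind of repetitive code an LLM can churn out on demand; the second half is the better abstraction whose existence that very repetitiveness reveals:

    ```python
    # Boilerplate an LLM could easily generate: one near-identical
    # validator per field (hypothetical fields, for illustration only).
    def validate_name(record):
        if not record.get("name"):
            raise ValueError("name is required")

    def validate_email(record):
        if not record.get("email"):
            raise ValueError("email is required")

    def validate_phone(record):
        if not record.get("phone"):
            raise ValueError("phone is required")

    # The abstraction that makes all of the above unnecessary:
    # the repetition was data pretending to be code.
    REQUIRED_FIELDS = ("name", "email", "phone")

    def validate(record, required=REQUIRED_FIELDS):
        for field in required:
            if not record.get(field):
                raise ValueError(f"{field} is required")
    ```

    If a model can pattern-match its way to the code, the pattern was already there to be factored out by hand.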