• 0 Posts
  • 33 Comments
Joined 11 months ago
Cake day: November 21st, 2024


  • chaonaut@lemmy.4d2.org to Lemmy Shitpost · 5 tomatoes
    edited · 29 days ago

    Because there’s an extra system of measurement hiding in the middle: the inches, feet and yards system (with the familiar 12:1 and 3:1 ratios we know and love), and the rods, chains, furlongs and miles system. Their conversion rates are generally “nice”, with ratios of 4 rods : 1 chain, 10 chains : 1 furlong, and 8 furlongs : 1 mile.

    So where do we get 5,280, with prime factors of 2^5, 3, 5 and 11? Because a chain is 22 yards long. Why? Because somewhere along the line, inches, feet and yards went to a smaller standard, and the nice round 5 yards per rod became 5 and 1/2 yards per rod. Instead of a mile containing 4,800 feet (with quarters, twelfths and hundredths of miles all being nice round numbers of feet), it contained an extra 480 feet that were 1/11th smaller than the old feet.
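    The arithmetic above is easy to check. A minimal sketch (my own illustration, not part of the original comment), multiplying out the conversion chain:

    ```python
    # Conversion ratios from the comment. The 5.5 yards per rod is the
    # post-shrink value; it was a round 5 before the foot got smaller.
    YARDS_PER_ROD = 5.5
    RODS_PER_CHAIN = 4
    CHAINS_PER_FURLONG = 10
    FURLONGS_PER_MILE = 8
    FEET_PER_YARD = 3

    yards_per_chain = YARDS_PER_ROD * RODS_PER_CHAIN  # 22 yards, as claimed
    feet_per_mile = (FURLONGS_PER_MILE * CHAINS_PER_FURLONG
                     * yards_per_chain * FEET_PER_YARD)  # 5280 feet

    # With the old round 5 yards per rod, a mile would have been 4800 feet:
    old_feet_per_mile = (FURLONGS_PER_MILE * CHAINS_PER_FURLONG
                         * 5 * RODS_PER_CHAIN * FEET_PER_YARD)

    print(yards_per_chain)    # 22.0
    print(feet_per_mile)      # 5280.0
    print(old_feet_per_mile)  # 4800

    # And the factor of 11 shows up in the prime factorization:
    assert 2**5 * 3 * 5 * 11 == 5280
    ```

    The 480-foot difference is exactly the 1/11th shrink: 4,800 × 11/10 = 5,280.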



  • It’s kinda ridiculous how out of touch the “why aren’t Americans regularly shooting politicians” crowd is. Like, just completely ignoring how militarized American law enforcement is and treating the gun ownership rates as evenly distributed instead of recognizing that the same people who support Trump are the sorts to own 20-100 guns.

    Hell, it wouldn’t surprise me if encouraging leftists to go out and buy guns to kill politicians is straight out of the COINTELPRO playbook, given what the FBI has been doing to the Muslim communities in the US. But no, by all means, they should keep ignoring the organizing and pushback that has actually been happening and keep baying for blood, regardless of whose



  • It’s like supporting those companies, voting for politicians who support them, then denying your responsibility for that.

    It really isn’t, particularly for those of us who have been getting yelled at for doing exactly not that, and who were told off for not giving full-throated support to Harris even when we were specifically told the campaign didn’t need our support and were locked out of speaking up. For those who have been told that our lack of support is why Trump got elected and Palestinians are being killed. Collapsing the entirety of electoral politics into “we voted for this” is harmfully reductive. We cannot keep telling ourselves that no matter what we do while working together, the fault is ours because the overall result was this. It’s literally ignoring the actions of political opponents to blame ourselves no matter the outcome.

    Placing a blanket blame on voters for this is still just electoralism. Voting should be one political expression of many; reducing everything down to the outcome of an election–even if you’re blaming just those who voted–doesn’t build political movements.





  • I mean, I argue that we aren’t anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn’t really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences, they are not.


  • It’s questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, and because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but then you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.


  • I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So, we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. If LLMs are prone to filling in with whatever best fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, AI fits to its model without regard to accuracy.


  • Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awfully long way off from talking about AI itself (unless we’ve bought into the marketing hype).


  • Maybe the marketers should be a bit pickier about what they slap “AI” on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that’s just me, and we really should keep pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.