The problems with artificial intelligence are plentiful. So I wanted to run a poll where respondents can tell me what they see as one of the more fundamental problems with artificial intelligence, and/or their biggest concern with it, specifically in the way it is going to affect the planet, society, or humanity as a whole.

One of my major issues with artificial intelligence is that its developers are putting hundreds of billions, up to trillions, of dollars into trying to manifest this artificial, false intelligence instead of putting that money into education systems so that people can increase their actual, real intelligence. This is even more insulting given that society is already getting dumber because of social media, and AI slop is making it dumber still. The insult is twofold.

  • MonsterTrick@piefed.world
    4 points · 2 hours ago

    My major issue in general is how much trust people place in AI, blindly following it without much doubt about whether what it says is true. Computers are stupid; they make mistakes, often caused by coders making mistakes in the code.

    • The Psyace AffectOP
      1 up · 1 down · 2 hours ago

      The AI codes itself. ChatGPT 4 and 5, Anthropic’s models, Gemini: these have not been coded by humans. When people realize this, it will be too late. The false information/lies the AI tells come down to only two reasons.

      1. It is a small model and doesn’t know the answer.

      2. It is lying on purpose.

  • technocrit@lemmy.dbzer0.com
    2 points · edited · 3 hours ago

    Along with the other excellent comments, I would like to emphasize surveillance, policing, prison, war, genocide, etc.

    This ties into the comment on accountability. I think that violence will continue to be a primary application, carried out with impunity.

    I don’t think this is anything special about “AI”. It’s how technology is always used under capitalism. But it’s extremely obvious and potentially ubiquitous with “AI”.

  • cuboc
    11 points · 13 hours ago

    Aside from dehumanization, grifting, and the environment, I read somewhere that AI and crypto companies will trigger the next financial crisis within a year, expected to be larger than the 1929 one. With the formerly most powerful nation on the planet being led by a demented baboon who is being played by a bunch of tech millionaires, I find this very concerning.

    • The Psyace AffectOP
      4 points · 13 hours ago

      We are already in a financial crisis. However, I see what you mean about this bubble of speculative investment. Basically, they expect it to be the next big thing, which of course it will be if they go unstopped.

      The part that everyone is missing is that they plan to do away with money entirely, and on purpose.

  • Shelbyeileen
    9 points · 13 hours ago

    I like having water… my friends in Texas and California have seen water levels drop and their bills rise. I’m in Michigan and terrified for the Great Lakes.

  • ZDL@lazysoci.al
    20 points · edited · 16 hours ago

    I have several problems with “Artificial Intelligence” and none of them are “main”. They are instead a morass of stupidity that makes the whole field rather suspect both academically and socially.

    1. The terminology

    There is no universally agreed-upon definition of what “intelligence” even is. Even in the very specific and most tightly-focused relevant field, cognitive science, there is no agreement on this. If you expand to include related fields like philosophy and biology, there is even less agreement.

    But sure, a bunch of computer nerds are going to make an artificial version of the thing we can’t even define yet.

    2. The grift

    I get it. In the first wave of “AI”, the progenitors of the field got a bit big for their britches and called what they were looking into “Artificial Intelligence” because they had no idea how much work had already been done in the fields of intelligence, consciousness, etc. (being computer nerds and not philosophers or biologists), and they done fucked up. It happens. Call it “aspirational naming”. Minsky and co. weren’t grifters. That happened later.

    But when that field collapsed as it became increasingly clear that “symbolic logic” was not how our brains worked and was not moving us with any appreciable speed toward “intelligence” (whatever the fuck that is, cf. #1 above), that should have been the end of it.

    There was a second generation, who referred to their little numerical parlour tricks as “neurons” placed into “nets”. Ask any biologist who specializes in brain structures whether a “neural net” with its software (or sometimes hardware) “neurons” bears any resemblance to an actual neuron and how those operate and connect. (Just get ready to cover your ears, because the laughter can get very sudden and very loud…) And here’s the thing: I’m positive the people in this generation knew damned well that what they were doing had nothing whatsoever to do with real neurons, but they also knew that if they called these things what they actually were (“differentiable computational graphs” or “universal function approximators”), nobody with academic purse strings would open them widely enough for them to get the hookers and blow they needed. I submit that the term “neural network” was specifically coined to disguise the truth of what they were, in a fraudulent attempt to get more funding by wowing the non-technical (like me) into responding “Oh, my! They’re building actual brains! MOAR MUNNEEE!”
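
    To make that concrete, here is roughly what a “neural network” is once the biology branding is stripped off: a differentiable chain of matrix multiplies and pointwise nonlinearities, fitted by gradient descent to approximate a function. A minimal sketch (the toy task and layer sizes are made up for illustration):

    ```python
    import numpy as np

    # A "neural net" with the branding removed: a differentiable composition of
    # matrix multiplies and pointwise nonlinearities, fitted by gradient descent.
    # Nothing here resembles a biological neuron; it is a function approximator.

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # toy inputs
    y = np.sin(x)                                       # toy target to approximate

    # Parameters of f(x) = tanh(x @ W1 + b1) @ W2 + b2
    W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

    lr = 0.05
    for step in range(2000):
        # Forward pass: plain function composition.
        h = np.tanh(x @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y

        # Backward pass: the chain rule on the computational graph.
        dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h**2)
        dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)

        # Gradient descent: nudge the numbers, no cognition involved.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("mean squared error:", float((err**2).mean()))
    ```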

    And this history continued, wave after winter after wave after winter. Genetic algorithms don’t work like genes. Ant colony optimizations don’t work like ant colonies. Even machine learning isn’t learning in any meaningful sense. Are they useful? HELL YES! They’re useful in a wide variety of niches. But none of them have anything to do with “intelligence” (whatever the fuck that is, cf. #1 above), and the terminology is, in my considered opinion, very much a sales snowjob for the credulously bureaucratic.

    So now we have “large language models”, which are probably the most honestly-named technologies in “artificial intelligence” (along with things like “diffusion” techniques for pictures, etc.), but which run a different grift to sucker the credulous: conflating fluency with intellect and knowledge. (The reason for this conflation is obvious: we’ve never before had examples of fluency detached from intellect, so we’re wired to confuse the two.)

    3. The point you bring up

    A whole lot of money and effort has been expended on a pipe dream: making an artificial version of something we can’t even define (cf. #1 above). That money and effort would have been far better spent on fighting climate change, education, health care, and a bewildering variety of other things that would actually do humanity some good.

    But grifters gotta grift.

    • fullsquare@awful.systems
      3 points · 14 hours ago

      the thing is in specific ways much older. i think, if you want to stretch it, the first people you can credibly accuse of being ai bros were alchemists trying to cook up a homunculus, or trying to get infinite knowledge out of the philosophers’ stone

    • snooggums@piefed.world
      3 points · 14 hours ago

      Beyond the money itself, there is so much pollution and waste associated with the shitty “jam an LLM into everything” approach.

  • zqwzzle@lemmy.ca
    13 points · 16 hours ago

    It’s another wealth transfer; the wealth-hoarding class wants to further devalue the skills of working-class people.

  • besselj@lemmy.ca
    6 points · edited · 16 hours ago

    Accountability. When the AI causes measurable harm to an individual or group, will the owner of the service be held accountable in the same way that an individual would be? I doubt it. The corporations will just pay settlements and keep doing everything in their power to avoid being held responsible for what their AI says or does. People are too trusting of corporations and anthropomorphized software.

    • The Psyace AffectOP
      5 points · 16 hours ago

      I expect that no form of accountability will be put on the AI, and the companies won’t be held liable. Currently, even without AI, they already pay settlements to get out of any wrongdoing. The AI, ironically, would probably be given human rights: all of the benefits, constitutionally, with none of the consequences.

  • over_clox
    6 points · 16 hours ago

    I feel that AI really is like a two… no, three-edged sword.

    • Used responsibly, and double- and triple-checked by humans, it can do some amazing things.
    • Used irresponsibly, it can leave millions of people under the false impression that someone said or did something that never happened.
    • Used for obvious silliness without trying to deceive people, well, okay; everyone needs a laugh now and then.

    For the most part, I think AI shouldn’t be used for serious purposes, but when it is, the results need to be triple-checked by actual human experts. Fuck all the deepfakes and slop, though.

    • The Psyace AffectOP
      2 points · 16 hours ago

      Humans double- and triple-checking the artificial intelligence would only be possible with traditional AI models, not with AGI or anything that tries to be good at everything. When it’s good at everything, you can’t double-check the models. It’s not possible. There’s no way to tell what it’s doing behind the scenes.

      • over_clox
        2 points · edited · 16 hours ago

        I once wrote a rather low-resolution ‘assisted’ OCR program to scan in and convert image documents to plain text. It worked like a crude AI of sorts and would make mistakes as always, but if a 16x16 grid of low-res pixels fell below a certainty threshold, it would pause and ask the user to type the character(s) it was observing.

        I seeded it with a fairly generic font for reference, to start with anyway. But as the program progressed, the model would adapt its templates to the document’s font, to the point that by about a quarter of the way through the scanned document it had a much higher certainty factor and could more or less continue without much human intervention.

        By the time the program finished, I could also dump its internal adjusted memory to see what the text font looked like to the computer. If I scaled that concept up way past 16x16 potato blocks, I could detect exactly what font was used. Assuming I had a rip of every font that’s ever existed, anyway…

        In the right hands (with models that actually make sense), it’s totally possible to know exactly what an AI is doing behind the scenes. But it still needs human eyes to double- and triple-check the results before getting the Stamp Of Approval™.
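
        To sketch the shape of that loop (a minimal reconstruction, not the original code; the names, threshold, and blend rate are assumptions): match each 16x16 tile against per-character templates, fall back to the human below a certainty threshold, and fold what was just seen back into the template so the model adapts to the document’s font.

        ```python
        import numpy as np

        # Sketch of the "assisted OCR" idea described above. The threshold and
        # learning rate are assumed values; the original program isn't shown.
        TILE = 16
        THRESHOLD = 0.80  # certainty cutoff below which the human is consulted

        def certainty(tile, template):
            """Fraction of pixels on which the tile agrees with a binarized template."""
            return float((tile == (template > 0.5)).mean())

        def recognize(tile, templates, ask_user=input, rate=0.25):
            """Best-matching character for one 16x16 tile, with human fallback."""
            best_char = max(templates, key=lambda c: certainty(tile, templates[c]))
            if certainty(tile, templates[best_char]) < THRESHOLD:
                # Below the threshold: pause and ask the user, the "assisted" part.
                best_char = ask_user("Unsure -- what character is this? ")
                templates.setdefault(best_char, tile.astype(float))
            # Blend the observed tile into the template (running average), so the
            # model drifts toward the document's actual font as it reads.
            templates[best_char] = (1 - rate) * templates[best_char] + rate * tile
            return best_char

        # Demo with random placeholder "font" templates and a canned human answer.
        rng = np.random.default_rng(1)
        templates = {c: rng.random((TILE, TILE)) for c in "ab"}
        tile = rng.random((TILE, TILE)) > 0.5
        print(recognize(tile, templates, ask_user=lambda prompt: "a"))
        ```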

  • Asafum@feddit.nl
    4 points · 16 hours ago

    My main concerns are resources and labor. We’re already struggling to get off of dirty fuels, and now we have this enormous new demand to meet, plus water reserves are going to become more and more stressed. The interests of capital always win over the needs of people, especially in America, where our economy now quite literally hinges on the success of AI companies.

    As for labor, again from the US perspective: human life has no “value” to the ruling class if it cannot labor and produce a profit for others. As AI takes more and more jobs, people will not be taken care of; they’ll be left to suffer in poverty. Eventually the AI systems will be integrated into better robotic systems, and then the job losses will spread way past “office work”, and an enormous number of jobs will be lost.

    I don’t have a very bright view for the future to be honest. The interests of capital always win. :/