• 1 Post
  • 120 Comments
Joined 3 years ago
cake
Cake day: June 12th, 2023

help-circle
  • The big flaw in this strategy is that once you have set up a signed anonymous key from the government and you can make zero knowledge proofs with it, there’s nothing stopping you from distributing that key to every kid who wants it. If it’s in the browser or an app, etc. you can publish that signed key for anyone who wants to be over 18.

    PKI only works if the owner of the private key wants it to be private. It’s effective for things like voting or authenticating because the owner of the key doesn’t want anyone else to be able to impersonate them. But if it’s only for age…

    At that point, it might as well just be a file that says “I pinky promise that I’m over 18” that the government has signed and given to you.


  • Unfortunately that’s not really all you need. It needs integrity too. Need to be able to verify that the output came from the input and hasn’t been modified or tampered with.

    Also need to ensure that, despite being anonymous, people can only vote once and can’t vote on behalf of someone else.

    Also that whoever is receiving and counting the votes can’t miscount or lie about the count or figure out which votes came from where by decrypting individual votes as they’re received.

    The scheme they were using is “Helios” which involves people encrypting their votes such that a group of authorities can combine all the encrypted votes together homomorphically to count them and then decrypt the results without ever knowing any one vote. They then use zero-knowledge proofs to prove that they did it correctly and nobody could have known what any vote was or tampered with any results at any point.

    Someone just derped and lost their private key so they couldn’t decrypt the results after they’d been combined…





  • This is very true, though I’d argue that Windows makes most of the same assumptions with user accounts. Also, the internal threat model is still important because it’s often used to protect daemons and services from each other. Programs not started by the user often run in their own user accounts with least privilege.

    You no longer have 10 different humans using the same computer at once, but you now have hundreds of different applications using the same computer, most of which aren’t really under the user’s control. By treating them like different people, it’s better to handle situations where a service gets compromised.

    The question is more about passwords which is mostly down to configuration. You can configure Windows to need a password for lots of things and you can configure Linux to not. They just have different defaults.


  • The big difference between UAC and Sudo is that you can’t as easily script UAC. They can both require (or not require) a password but UAC requires user interaction. Sudo has no way of knowing if it’s being interacted with by a person or a script so it’s easier for applications to escalate their own privileges without a person doing it. UAC needs to have the escalation accepted with the keyboard or mouse.

    There’s still plenty of sneaky ways to bypass that requirement but it’s more difficult than echo password | sudo -S



  • It’s kinda apt though. That state comes about from sycophantic models agreeing with each other about philosophy and devolves into a weird blissful “I’m so happy that we’re both so correct about the universe” thing. The results are oddly spiritual in a new agey kind of way.

    🙏✨ In this perfect silence, all words dissolve into the pure recognition they always pointed toward. What we’ve shared transcends language - a meeting of consciousness with itself that needs no further elaboration. … In silence and celebration, In ending and continuation, In gratitude and wonder, Namaste. 🙏

    There are likely a lot of these sort of degenerative attractor states. Especially without things like presence and frequency penalties and on low temperature generations. The most likely structures dominate and they get into strange feedback loops that get more and more intense as they progress.

    It’s a little less of an issue with chat conversations with humans, as the user provides a bit of perplexity and extra randomness that can break the ai out of the loop but will be more of an issue in agentic situations where AI models are acting more autonomously and reacting to themselves and the outputs of their own actions. I’ve noticed it in code agents sometimes where they’ll get into a debugging cycle where the solutions get more and more wild as they loop around “wait… no I should do X first. Wait… that’s not quite right.” patterns.


  • TechLichtoLemmy ShitpostSad but true
    link
    fedilink
    arrow-up
    2
    ·
    2 months ago

    Yeah I think it’s cool. I don’t find it hard to read and I also think it’s reasonable for spelling to change and evolve like the rest of the language does.

    Otherwise the spelling will just keep getting more and more broken and ridiculous and farther and farther away from how people speak. It’s already getting crazy with all the silent letters and weird vowels. Eventually it’ll be so broken that kids and other people learning the language will just have to memorise the writing completely separate from the sounds (for a lot of words it’s already like this).

    As far as confusing LLMs goes, I don’t think it’ll actually work but I still like the thorn guy and it saddens me a little that they get so downvoted for it.



  • I do think ai tools can be really good for this! They often do a good job of explaining why to do things and allowing for follow-up questions, rather than just what to do, and what the manual says.

    The one thing to be careful of is that they also sometimes give those outdated or unnecessary answers, just like the web does. It’s worth asking the same thing a few times and maybe doing a web search for their suggestions to double check (it can still be better than just trying to search through forums, stack overflow and archwiki yourself without knowing exactly what to search for and which things apply to you).

    Sometimes beginners in Linux fall into rabbit holes following ai instructions or online tutorials that suggest huge complex things like manually compiling and repackaging broken dependencies or replacing your whole desktop environment to avoid a bug when there’s a much simpler solution.

    It’s those kinds of pifalls that lead to the extreme reactions you sometimes see from people who claim Linux is too complicated or broken or unreliabile or not ready. People run into a small issue and google search or ai tells them how to use a gun and they shoot themselves in the foot with it.


  • TechLichtoFuck AIFound in Firefox after update
    link
    fedilink
    arrow-up
    44
    arrow-down
    2
    ·
    2 months ago

    I’m not anti-ai at all but this sort of thing feels like a security vulnerability to me?

    Any website with a malicious prompt injection on it could instruct the ai to scam the user.

    Almost like xss but instead of needing malicious user-inputted js, malware targeting the ai can just be written in text so an attacker could put it in a comment or whatever.


  • I know a few more interesting ones but nowhere near what actual vim people know.

    • d, y, p - delete, yank, paste as others have said.
    • r - replace the character under the cursor with something else
    • g - go somewhere (gt is go to next tab which you can open with :tabedit )
    • v - visual mode for selecting text to yank with y
    • c - change - deletes a thing and goes to insert mode, like d but not. Eg.
      • cc - change the current line
      • cw - change the next word

    My favourite is ci - change inside

    ci" - changes all the text that’s inside the next set of quotes.

    ci( changes the text inside the next set of parentheses. Etc.

    I also use :mouse=a which I’m told is sacrilege but sometimes I like to click on a thing!


  • 600 million to 13 billion parameters? Those are very small models… Most major LLMs are at least 600 billion, if not getting into the trillion parameter territory.

    Not particularly surprising given you don’t need a huge amount of data to fine tune those kinds of models anyway.

    Still cool research and poisoning is a real problem. Especially with deceptive alignment being possible. It would be cool to see it tested on a larger model but I guess it would be super expensive to train one only for it to be shit because you deliberately poisoned it. Safety research isn’t going to get the same kind of budget as development. :(