• 30 Posts
  • 687 Comments
Joined 2 years ago
Cake day: August 10th, 2023

  • Proxmox is based on Debian, with its own virtualization packages and system services that do something very similar to what libvirt does.

    Libvirt + virt-manager also uses QEMU/KVM as its underlying virtual machine software, meaning performance will be identical.

    Although perhaps there will be a tiny difference due to libvirt’s use of the more performant SPICE for graphics vs Proxmox’s noVNC, it doesn’t really matter.

    The true minimal setup is to just use QEMU/KVM directly: the virtual machine performance will be the same as under libvirt, in exchange for a very small reduction in overhead.
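
    For anyone curious what “directly” looks like, here’s a minimal sketch; the disk image name, sizes, and ISO are placeholders, adjust to taste:

    ```
    # Create a disk image, then boot a VM with KVM acceleration.
    qemu-img create -f qcow2 disk.qcow2 20G

    # -enable-kvm: hardware acceleration; -cpu host: pass through the host CPU;
    # -m/-smp: RAM and vCPUs; -nic user: simple user-mode networking.
    qemu-system-x86_64 -enable-kvm -cpu host -m 4G -smp 4 \
        -drive file=disk.qcow2,format=qcow2 \
        -cdrom debian.iso -nic user
    ```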


  • If this is the thread you are referring to, this is far from “vitriol” or being “combative”. You said it yourself: there are two other users testing who were able to reproduce your issue. And the person who was unable to reproduce it is still being helpful, because it confirms that their specific setup (powerful server + Ubuntu Snap) doesn’t encounter this issue. Of course they are not going to offer any further troubleshooting advice; what can they do? They aren’t encountering the issue, so they can’t really help you in the hands-on way the other commenters are. So instead they pointed you to some other places you could ask for further troubleshooting. “I can’t help you” is very, very different from “fuck off!”.

    Look, I get it. You’re tired, and probably frustrated. Just take a break or something. It’s clear that making this post didn’t advance your goal of troubleshooting this issue.

    Now, let me take a crack at it. Nextcloud is one of like 3 pieces of software I can name off the top of my head that can run into performance issues when it is deployed without an in-memory cache of some sort. It looks like you were trying to install Redis here, although I don’t know how far you got, or if this was even the same Nextcloud setup?

    But many people frequently encounter performance issues with the manual install that they don’t encounter with “distributions” of Nextcloud that include Redis or other performance optimizations, like the Docker AIO install… or the Snap version that the person who wasn’t encountering the issue used. So yes, knowing that someone doesn’t encounter an issue is useful information to me.
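
    For reference, on a manual install where you do have shell access, wiring up the caches is typically a handful of occ calls; this is a sketch that assumes a standard install at /var/www/nextcloud with Redis and the PHP apcu/redis extensions already present:

    ```
    cd /var/www/nextcloud

    # Local in-memory cache for a single server, plus Redis for file locking.
    sudo -u www-data php occ config:system:set memcache.local --value '\OC\Memcache\APCu'
    sudo -u www-data php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
    sudo -u www-data php occ config:system:set redis host --value 'localhost'
    sudo -u www-data php occ config:system:set redis port --value 6379 --type integer
    ```

    On managed hosting you likely can’t run any of this, which is exactly the problem below.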

    Can you confirm, both here and in the original thread, what deployment method your hosting provider is using for Nextcloud? That would isolate a lot of variables, and it would allow people to give you more precise advice on debugging the service, since debugging a Docker or Snap version is different from debugging a raw LAMP-stack install. Right now we are essentially flying blind, so it’s no wonder that no progress has been made.

    have you considered contacting hosting support?

    Of course not. I came to the available discussion forum to investigate a situation which may or may not be a flaw, and is clearly not a hosting company’s responsibility. Besides the fact that they would likely tell me exactly that if I get a response at all, I always explore all other avenues before opening tickets and GitHub issues.

    Lmao. You pay them for seamless Nextcloud as a service, and that includes support. But to be blunt, we can’t really help you if we don’t know what the hosting provider is doing.

    If this is a performance optimization problem, you may not have the privileges on the server that you would need to fine-tune Nextcloud in order to fix this.

    If this is a bug, you can’t really see granular logs from the Nextcloud host: same thing.

    Idk what to tell you. You are trying to manage managed Nextcloud like it is self-hosted Nextcloud, and you are getting frustrated when people tell you that you might not have the under-the-hood access needed to fix what you want to fix.




  • To copy what I said when this was posted in another community:

    The PNG didn’t do shit. Users were compromised by a malicious extension.

    Steganography (hiding data in a PNG) is a non-issue and cannot do anything independently. It is also essentially impossible to stop.

    Which is probably why the cybersecurity news cycle likes to pretend that steganography is a risk on its own, so that they can sell you products to stop this “threat”.
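
    To illustrate why it can’t be stopped, here’s a sketch: PNG viewers stop reading at the IEND chunk, so you can append arbitrary bytes to any image and it still renders everywhere (file names here are made up):

    ```
    # “Hide” a payload by appending it after the PNG’s IEND chunk.
    cat payload.bin >> innocent.png

    # The image still opens fine in every viewer; extracting the payload
    # is equally trivial for code that knows how many bytes to grab.
    tail -c "$(stat -c %s payload.bin)" innocent.png > recovered.bin
    ```

    The payload is completely inert until something already malicious (here, the extension) digs it out and acts on it.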

    I hate the clickbait title is what I’m trying to say. But the writeup is pretty interesting.

    Although the real solution to this problem is probably only letting users install known safe extensions from an allowlist, instead of “pay us for consulting!”.




  • From Flathub’s docs: https://docs.flathub.org/blog/app-safety-layered-approach-source-to-user#reproducibility--auditability

    The build itself is signed by Flathub’s key, and Flatpak/OSTree verify these signatures when installing and updating apps.

    This does not seem to be optional or up to the control of each developer or publisher using the Flathub repos.

    Unless, of course, you mean packages distributed via Flatpak in general?

    Hmmm, this is where my research leads me.

    https://docs.flatpak.org/en/latest/flatpak-builder.html#signing

    Though it generally isn’t recommended, it is possible not to use GPG verification. In this case, the --no-gpg-verify option should be used when adding the repository. Note that it is necessary to become root in order to update a repository that does not have GPG verification enabled.

    Going further, I found a relevant GitHub issue where a user ran into Flatpak refusing to install an unsigned package and asked for a CLI flag to bypass the block.

    I don’t really see how this is any different from apt refusing to install unsigned packages by default but allowing a command line flag (--allow-unauthenticated) as an escape hatch.

    To be really pedantic, apt key signing is also optional; it’s just that apt is configured to refuse to install unsigned packages by default, and therefore all major repos sign their packages with GPG keys. Flatpak appears to follow this exact same model.
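
    The parallel is easy to see on the command line; the repo URL and package name below are placeholders:

    ```
    # Flatpak: adding an unsigned repo requires explicitly opting out of GPG checks.
    flatpak remote-add --no-gpg-verify myrepo https://example.com/repo

    # apt: installing an unsigned package likewise requires an explicit escape hatch.
    apt-get install --allow-unauthenticated somepackage
    ```

    In both cases the insecure path exists, but you have to ask for it by name.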




  • sandboxing is not the best practice on Linux… So I’m better off with Qubes than with Secureblue

    No, no, no.

    It’s not that sandboxing isn’t best practice; it’s that attempting to “stack” Linux sandboxes is mostly ineffective. If I run KVM inside Xen, I get more security. If I run a Linux container inside a Linux container, I only get the benefit of one layer. But Linux sandboxes are good practice.

    I do agree that secureblue sucks, but I don’t understand your focus on Qubes. To elaborate on my criticisms, let me reply to this comment:

    Many CVE’s for Xen were discovered and patched by the Qubes folks, so that’s a good thing…

    If you really, really care about security, it’s not enough to “find and patch CVEs”. The architecture of the software must be organized in such a way that certain classes of vulnerabilities are impossible, so the CVEs never exist in the first place. A lack of separation between different privilege levels turns a normal bug into a critical security issue.

    Xen having so many CVEs shows that it has some clear architectural flaws, and that despite technically being a “microkernel”, the isolation between its components is not strong enough to prevent privilege escalation flaws.

    gVisor having very few CVEs over its lifespan shows it has a better architecture. Same for OpenBSD: despite having a “monolithic” kernel, I would trust OpenBSD more in many cases (will elaborate later).

    Now, let’s talk about threat model. Personally, I don’t really understand your fears in this thread. You visited a site, got literally jumpscared (not even phished), and are now looking at Qubes? No actual exploit took place.

    You need to understand that the sandboxing that browsers use is one of the most advanced in existence currently. Browser escapes are mostly impossible… mostly.

    In addition, you need to know that, excluding OpenBSD, gVisor, and a few other projects, almost everything else will have a regular outpouring of CVEs at varying rates, depending on how well it is architected.

    Xen is one of those projects. Linux is one of those projects. Your browser is one of those projects. Although I consider Linux a tier below in security, I consider Xen and browsers to exist at a similar tier of security.

    What I’m trying to say is that any organization/entity sitting on a browser sandbox escape will most definitely also have a Linux privilege escalation vulnerability, and will probably have a Xen escape-and-escalation vulnerability too.

    The qube with the browser might get compromised, but dom0 would stay safely offline, that’s my ideal, not the utopic notion of never possibly getting attacked and hacked.

    This is just false. Anybody who is able to do the very difficult task of compromising you through the browser will probably also be able to punch through Xen.

    not the utopic notion of never possibly getting attacked and hacked.

    This is true actually. Browser exploits are worth millions or even tens of millions of dollars. And they can only really be used a few times before someone catches them and reports them so that they are patched.

    Why would someone spend tens of millions of dollars to compromise you? Do you have information worth millions of dollars on your computer? It’s not a “utopic notion”, it’s being realistic.

    If you want maximum browser security, disable JavaScript and use Chromium on OpenBSD. Chromium has slightly stronger sandboxing than Firefox, although Chromium produces CVEs at roughly the same rate as Firefox. Where it really shines is when combined with OpenBSD’s sandboxing (or GrapheneOS’s, for phones).
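
    For the “disable JavaScript” part, one way is a Blink engine switch at launch; this is a sketch, and the binary name varies by distro (chromium vs chromium-browser):

    ```
    # Launch Chromium with JavaScript disabled globally.
    # --blink-settings flips Blink engine settings at startup.
    chromium --blink-settings=scriptEnabled=false
    ```

    Re-enabling JS per site is easier through chrome://settings/content/javascript than through flags.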

    Sure, you can run Xen under that setup. But there will be no benefit; you already have a stronger layer in front of Xen.

    TLDR: Your security setup is really only as strong as its strongest layer/shield. Adding more layers doesn’t offer much benefit, and chasing ever-stronger layers is a waste of your time, because you aren’t a target.




  • To answer your first question: kind of. gVisor (by Google, btw) uses the Linux kernel’s sandboxing to sandbox the gVisor process itself.

    Distrobox also uses the Linux kernel’s sandboxing, which is how Linux-based containers work.

    Due to the attack surface of the Linux kernel’s sandboxing components, the ability to create sandboxes or containers inside other sandboxes or containers is usually restricted.

    What this means is that to use gVisor inside Docker/Podman (Distrobox), you must either loosen the (kinda nonexistent) Distrobox sandbox, or disable the sandboxing gVisor applies to itself. Either way you lose the benefit, and you would be better off just using gVisor alone.

    It’s complicated, but basically the Linux kernel’s container/sandboxing features can’t really be “stacked”.
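
    For reference, using gVisor “alone” usually means registering its runsc runtime with Docker on the host and selecting it per container, roughly like this (assumes runsc is already installed):

    ```
    # Register runsc as a Docker runtime (writes /etc/docker/daemon.json),
    # then restart the daemon.
    sudo runsc install
    sudo systemctl restart docker

    # Run a container under gVisor instead of the default runc runtime.
    docker run --rm --runtime=runsc alpine uname -a
    # This prints gVisor’s emulated kernel version rather than the host’s.
    ```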