  • If the goal is stability, I would likely have started with an immutable OS. That gives some assurance that the base OS is in a known-good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as Flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and tying services to the “:latest” tag, you pick up whatever the newest image is each time the stack is pulled and recreated. However, it’s possible to have stacks where a breaking change lands in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually. Finally, while I really like AppImages, updating them is 100% manual.
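
    As a rough illustration of the version-pinning approach (the services, image names and version numbers below are invented, not a real stack):

    ```yaml
    # Hypothetical docker-compose.yaml with images pinned to explicit versions
    # instead of ":latest", so a breaking upstream change can't land until you
    # deliberately bump a tag and redeploy.
    services:
      app:
        image: ghcr.io/example/app:1.4.2   # bump manually after reading release notes
        depends_on:
          - db
        ports:
          - "8080:8080"
      db:
        image: postgres:16.3               # pinned, not ":latest"
        volumes:
          - db-data:/var/lib/postgresql/data

    volumes:
      db-data:
    ```

    Updating then becomes an explicit edit-and-redeploy step, rather than whatever happens to be sitting behind “:latest” at pull time.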

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there isn’t a Flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff, but you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
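
    For the sake of illustration, a single-package Dockerfile can be as small as the sketch below; the tool name, download URL and paths are placeholders, not a real project:

    ```dockerfile
    # Minimal sketch: wrap one upstream release in its own container so it
    # never touches the base OS or any other package.
    FROM debian:bookworm-slim

    RUN apt-get update && \
        apt-get install -y --no-install-recommends ca-certificates curl && \
        rm -rf /var/lib/apt/lists/*

    # Placeholder download; swap in the real release tarball for your tool.
    RUN curl -sSL https://example.com/sometool/sometool-1.2.3-linux-amd64.tar.gz \
            | tar -xz -C /opt

    # Run unprivileged and keep state on a mounted volume, not in the image.
    RUN useradd --system --create-home sometool
    USER sometool
    VOLUME /data

    ENTRYPOINT ["/opt/sometool/sometool", "--data-dir", "/data"]
    ```

    Updates are then just a version bump in that one file and a rebuild; nothing else on the system gets touched.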


  • It’s going to depend on what types of data you are looking to protect, how you have your wifi configured, what type of sites you are accessing and whom you are willing to trust.

    To start with, if you are accessing unencrypted websites (HTTP), at least part of the communications will be in the clear and open to inspection. You can mitigate this somewhat with a VPN. However, this means that you need to implicitly trust the VPN provider with a lot of data. Your communications to the VPN provider would be encrypted, though anyone observing your connection (e.g. your ISP) would be able to see that you are communicating with that VPN provider. And any communications between the VPN provider and the unencrypted website would also be in the clear and could be read by someone sniffing the VPN exit node’s traffic (e.g. the ISP used by the VPN exit node). Lastly, the VPN provider would have a very clear view of the traffic and be able to associate it with you.

    For encrypted websites (HTTPS), the data portion of the communications will usually be well encrypted and safe from spying (more on this in a sec). However, it may be possible for someone (e.g. your ISP) to snoop on what domains you are visiting. There are two common ways to do this. The first is via DNS requests. Any time you visit a website, your browser needs to translate the domain name to an IP address. This is what DNS does, and it is not encrypted by default. Also, unless you have taken steps to avoid it, it’s likely your ISP is providing DNS for you. This means they can simply log all your requests, giving them a good view of the domains you are visiting. You can use something like DNS over HTTPS (DoH), which encrypts DNS requests and sends them to specific servers; but this usually requires extra setup, and it works the same whether you are on your local WiFi or on a 5G/4G network.

    The second way to track HTTPS connections is via a process called Server Name Indication (SNI). In short, when you first connect to a web server, your browser needs to tell that server which domain it wants, so that the server can send back the correct TLS certificate. This is unencrypted, and anyone in between (e.g. your ISP) can simply read the SNI value to know what domains you are connecting to. There are mitigations for this, specifically Encrypted Server Name Indication (ESNI), but that requires the web server to implement it, and it’s not widely used. This is also where a VPN can be useful, as the SNI value is encrypted between your system and the VPN exit node. Though again, it puts a lot of trust in the VPN provider, and the VPN provider’s ISP could still see the SNI value as it leaves the VPN network. Though associating it with you specifically might be hard.

    As for the encrypted data of an HTTPS connection, it is generally safe. So, someone might know you are visiting lemmy.ml, but they wouldn’t be able to see what communities you are reading or what you are posting. That is, unless either your device or the server is compromised. This is why mobile device malware is a common attack vector for state-level threat actors. If they have malware on your device, then all the encryption in the world ain’t helping you. There are also some attacks around forcing your browser to use weaker encryption, or even the attacker compromising the server’s certificate. Though these are likely in the realm of targeted attacks and unlikely to be used on a mass scale.

    So ya, not exactly an ELI5 answer, as there isn’t a simple answer. To try and simplify, if you are visiting encrypted websites (HTTPS) and you don’t mind your mobile carrier knowing what domains you are visiting, and your device isn’t compromised, then mobile data is fine. If you would prefer your home ISP being the one tracking you, then use your home wifi. If you don’t like either of them tracking you, then you’ll need to pick a VPN provider you feel comfortable with knowing what sites you are visiting and use their software on your device. And if your device is compromised, well you’re fucked anyway and it doesn’t matter what network you are using.


  • No, a game should be what the devs decide to make. That said, it can cut off a part of the market. I’m another one of those folks who tends to avoid PvPvE games without a dedicated PvE-only side. This weekend’s Arc Raiders playtest was a good example. I read through the description on Steam and just decided, “na, I have better things to do with my time.” Unfortunately, those sorts of games tend to have a problem with griefers running about, directly trying to ruin other people’s enjoyment. I’ll freely admit that I will never be as good as someone who is willing to put the hours into gear grinding, practice and map memorization in such a game. I just don’t enjoy that, and that means I will always be at a severe disadvantage. So, why spend my time and money on such a game?

    This can lead to problems for such games unless they have a very large player base. The Dark Souls series is a good example: it has a built-in, forced PvP system, though you can kinda avoid it in solo play, and it still has a large player base. But I’d also point out some of the controversy around the Seamless Co-op mod for Elden Ring. When it released, the PvP players were howling from the walls about how long it made invasion queues. Since Seamless Co-op meant that the players using it were removed from the official servers, the number of easy targets to invade went way, way down. It seems a lot of folks like having co-op without the risk of invasion.

    As a longer answer to this, let me recommend two videos from Extra Credits:

    These videos provide a way to think about players and how they interact with games and each other.




  • Ultimately, it’s going to come down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, that means things like pictures and personal documents we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user lives in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.



  • And nothing of value was lost. Sure, EA has published a few gems in recent years, but as a developer it’s all sports games and Battlefield. The talent isn’t at EA; it’s at the developers they have been supporting. If we’re lucky, the leveraged buyout will result in anything good owned by EA being sold off for parts and the worthless husk of EA being saddled with the debt and left to go bankrupt.

    Who knows, maybe the license to make Star Wars games will go somewhere that isn’t dead set on fucking it up as hard as possible to meet the Christmas season deadline.


  • I started self-hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare-metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues of dependency hell and flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
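
    For illustration (this isn’t the exact template, just a rough, untested sketch of the idea), a steamcmd-based server Dockerfile tends to look something like the below. The AppId, ports and start script shown are Valheim’s usual values, and those are exactly the bits that get tweaked per game; the base image and install paths are just choices made for the sketch:

    ```dockerfile
    # Rough sketch of a steamcmd-based dedicated server image (untested).
    # The bits that change per game: the AppId, the exposed ports, and the
    # launch command at the bottom.
    FROM debian:bookworm-slim

    # steamcmd needs the 32-bit gcc runtime, plus certs/curl for the download.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends ca-certificates curl lib32gcc-s1 && \
        rm -rf /var/lib/apt/lists/*

    # Valheim Dedicated Server AppId; swap this out per game.
    ARG APPID=896660

    # Fetch steamcmd and install the server into /server at build time.
    RUN mkdir -p /steamcmd /server && \
        curl -sSL https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz \
            | tar -xz -C /steamcmd && \
        /steamcmd/steamcmd.sh +force_install_dir /server +login anonymous \
            +app_update ${APPID} validate +quit

    WORKDIR /server

    # Valheim uses UDP 2456/2457; other games expose different ports.
    EXPOSE 2456/udp 2457/udp

    # Mount point for world/save data, so saves live outside the image.
    VOLUME /config

    # Placeholder launch command; Valheim ships start_server.sh, and the
    # server name/world/password flags get set in a copy of that script.
    CMD ["./start_server.sh"]
    ```

    Rebuilding the image re-runs the app_update step, which is why “just rebuild it” works as an update process.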




  • a Bambu Labs compatible heat sink, an E3D V6 ring heater, and a heat break assembly are required
    a fan was sacrificed to mount a Big Tree Tech control board. Most everything ended up connecting to the new board without issue, except for the extruder.
    made a custom mount for the ubiquitous Orbiter extruder.
    The whole project was nicely tied up with a custom-made screen mount.

    So, other than the enclosure and print bed, what’s actually left of the original printer? It seems like the way to get a Bambu printer to run FOSS is to open the box from Bambu Labs, toss everything inside the box in the trash, drop a custom-built printer in the box, and then proceed with your unboxing.