A while ago I stumbled upon a simple yet surprisingly interesting question on Hacker News, which seemed to make some people (re-)evaluate their current setups and look for ways to improve them. Hence I figured it might be an interesting subject to cover.
The question that I stumbled upon on Hacker News read: "How do you trust that your personal machine is not compromised?"
“Compromised” meaning that malware hasn’t been installed or that it’s not being accessed by malicious third parties. This could be at the BIOS, firmware, OS, app or any other level.
Funnily enough, around the time the post showed up on Hacker News, different people reached out to me, asking things that went in similar directions, for example …
[…] What security measures do you recommend on Linux? I have […]
… and …
[…] I wanted to know if you have any tips for securing your Gentoo installation. […]
… or …
[…] and do you use a Yubikey? […]
It seemed the question on Hacker News had struck a nerve – which is great! Hence, I thought it might make sense to address a few statements from the post that stuck out to me, as well as the questions I was receiving. Take this post as sort of a written version of the “someone irrelevant reacting to a hyped video” content on YouTube.
I’m going to try to consolidate comments into what I believe are essentially similar views and add my own take to what I believe were the most important ones.
That’s why I only use M1 Macs / ChromeOS / Windows + AppGuard / …
It really depends on how one defines being compromised in the first place. If being compromised refers to a hacker gaining access to your device, then yes, these approaches are undeniably good, run-of-the-mill choices to stay safe.
If however being compromised refers to any entity besides oneself having access to some amount of private data, then using any of these things makes you virtually compromised. While for the average Joe, who has nOtHiNg tO hIdE, an Apple or Google device is a top-notch choice in terms of security, the tinfoil hat brigade might not feel overly comfortable using either of these cloud-connected devices.
Using YubiKey / Google Authenticator / … as 2FA
This one puzzled me. I don’t see the benefit and I feel this is some sort of voodoo dance people like to perform. Let me get this straight:
We’re talking about, quote, your personal machine. It’s not an SSH server on the internet; it’s the physical device in front of you. I fully agree on using a second factor for any remote machines, services and websites. But explain to me: What is the point of a second factor for authentication on a system that’s not remotely accessible? Bear in mind, it’s not disk encryption we’re talking about; people seem to refer to PAM.
What would the threat vector look like? A hacker breaking into your house, sitting down in front of your locked computer and typing random passwords to unlock it? Or someone doing the same thing at the local coffee shop, while you went to the barista to pick up your order?
If it’s a laptop and you’re arguing that “well, what if I have it locked somewhere and someone steals it, goes home and figures out my 24-character passphrase without triggering PAM account deactivation?” then I’m wondering: Do you also have a second factor on your smartphone – the device that holds all your favorite nudies and is likely protected by only a 4-to-8-digit PIN? Because if not, I believe your threat model might be slightly skewed.
Unless your device offers remote login, I don’t see the benefit of having a second factor solely for the purpose of authentication. If your machine is being attacked, it’s going to be by exploiting vulnerabilities in the software it runs, not by someone brute-forcing your user password. If your device gets stolen, disk encryption is going to be the barrier stopping someone from accessing your data, not PAM authentication. And in the unlikely event that your device is stolen while running but locked, a sane PAM configuration that e.g. temporarily locks your user account or issues a shutdown after a number of failed unlock attempts is a much more sensible approach than a second factor. I’d argue there’s a higher chance of shooting yourself in the foot by over-complicating local authentication with a second factor than there is benefit to it – just consider an unexpected issue with time synchronization, for example.
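For reference, the lockout behavior described above can be configured with the pam_faillock module. A minimal sketch – the exact file name, module ordering and thresholds vary by distribution, so treat the numbers as placeholders:

```
# /etc/pam.d/system-auth (sketch – adapt to your distribution)
# Lock the account for 10 minutes after 5 failed attempts.
auth     required      pam_faillock.so preauth silent deny=5 unlock_time=600
auth     sufficient    pam_unix.so try_first_pass
auth     [default=die] pam_faillock.so authfail deny=5 unlock_time=600
account  required      pam_faillock.so
```

The point is that this achieves the “stolen while locked” protection with one well-understood module, instead of bolting a hardware token onto local logins.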
Just use Docker containers

I believe the reason some people suggest Docker containers as a security measure probably stems from a misunderstanding of what Docker actually does. Docker containers are not isolated VMs; they are regular processes that run on the same kernel as the rest of the host system. While Docker does (more or less) restrict a process from, say, accessing files outside its own sandbox, the software being run inside a container is still just another process on that shared kernel. If the software running inside the container were to successfully exploit a kernel (or even hardware) vulnerability that granted it elevated privileges, it would still be able to, for example, take a peek into that interesting Firefox process of yours.
Qubes OS, Qubes OS, Qubes OS!!!!
Yes and no. Yes, Qubes OS, unlike Docker, does actual isolation and focuses heavily on the idea of compartmentalizing every component – be it the browser, the firewall, or even the USB stack. Yet let’s not forget that at the end of the day these qubes run on Xen, a hypervisor that also has its fair share of vulnerabilities.
Indeed, Qubes might be the best thing we’ve got right now for the truly paranoid, and while there are scenarios in which it makes sense, it’s certainly not something a wide range of even advanced users could realistically use on a day-to-day basis.
Obviously someone like Edward Snowden runs Qubes OS despite its drawbacks, but I’d be surprised to find regular people who are consistently – and, more importantly, productively – using Qubes OS as their daily driver. I’d argue that one needs an unusually high tolerance for quirky behavior and latency, as well as a job in a field where the advantages of Qubes’ increased security outweigh its drawbacks: slow startup of individual applications, the lack of GPU acceleration, the increased memory usage, the configuration overhead for just running stuff, the learning curve, and so on.
With Qubes we’re still stuck with the iron triangle, and what Qubes ultimately does is answer the question asked on Hacker News by effectively admitting that it doesn’t know either whether your system is compromised – but if it is, it’s hopefully only a part of the system, and it might be fixable by destroying and recreating the compromised VM.
Wipe and reformat from a known clean image regularly
Now that’s what I would call the Qubes OS approach for non-Qubes operating systems. Remember that malicious software might very well hide in places that are backed up and restored onto a freshly installed system. Or it might hide in layers well below the operating system, rendering this approach useless.
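If you do go the wipe-and-reinstall route, at least verify that your “known clean” image is actually the image you think it is before flashing it, using a digest obtained out of band. A minimal sketch in Python (file names and digests are placeholders):

```python
import hashlib


def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file without loading it into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify_image(image_path: str, expected_digest: str) -> bool:
    """Compare the image's digest against one obtained from a trusted channel."""
    return sha256sum(image_path) == expected_digest.strip().lower()
```

This obviously doesn’t help against anything living below the OS, but it at least rules out a tampered installation medium.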
put $10 in a crypto wallet and keep the seed phrase on your desktop
I keep a few £ in bitcoin stashed on it. If it ever disappears, I’ll know.
No one has drained my crypto from my wallets yet.
This is probably meant as a sort of honeypot. However, it’s somewhat unreasonable to think that someone who went through the effort of gaining remote access to your machine will transfer $10 in cryptocurrency and be done with it. In the worst case, using crypto as a honeypot might just motivate an attacker to scramble for more, seeing that their target possesses knowledge in that area. Also, most malware targeting crypto seems to go after mobile devices rather than computers. Crypto might also not work as a simple intrusion detection system, because financial gain might not even be the attacker’s primary goal – unless there’s a lot of crypto to be stolen. I’d argue that there are significantly better honey tokens than crypto.
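A simpler honey token: plant a decoy file that nothing legitimate should ever touch, record a baseline of its metadata, and alert when that changes. A hedged sketch in Python – the bait content, paths and polling approach are placeholders; on a real system you’d rather hook this into inotify or auditd than compare metadata by hand:

```python
import json
import os


def plant_token(path: str, state_file: str) -> None:
    """Create a decoy file and record a baseline of its metadata."""
    with open(path, "w") as f:
        f.write("aws_secret_access_key = FAKE-DO-NOT-USE\n")  # bait content
    st = os.stat(path)
    with open(state_file, "w") as f:
        json.dump({"size": st.st_size, "mtime": st.st_mtime}, f)


def token_tampered(path: str, state_file: str) -> bool:
    """True if the decoy was deleted or modified since the baseline."""
    with open(state_file) as f:
        baseline = json.load(f)
    if not os.path.exists(path):
        return True
    st = os.stat(path)
    return st.st_size != baseline["size"] or st.st_mtime != baseline["mtime"]
```

Unlike a funded wallet, a decoy like this costs nothing, doesn’t advertise that you hold crypto, and trips on any access pattern you choose to watch for.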
Okay, now we’re talking! This is where the conversation gets interesting, because that’s the point that basically affects all the answers given in that post: Hardware and low-level firmware.
The BIOS (read: EFI) has always been the one component where you as a user need to blindly trust the hardware vendor to get it right and not do shady things, because the firmware used there is proprietary, closed source and usually poorly documented. Malware has been found on factory motherboards of different vendors in the past, and especially UEFI firmware appears to be a preferred target for a growing number of state-sponsored hacker groups.
While there are measures in place to make tampering with the proprietary firmware harder – like Secure Boot – the core issue remains: With closed-source firmware the user must essentially trust the vendor, since nobody else can vet the code that’s being executed at that level. If a vendor decides to deliberately include a backdoor in their components, Secure Boot won’t help you. The only thing that solves this issue is actual open firmware – something System76 have been doing for a while now, and something AMD also seems to have an interest in. Unfortunately we’re not there yet, and it will be a long road until enough hardware with open firmware becomes readily available, especially to non-technical users.
Ultimately though, whatever solution you might choose – whether that’s an SELinux installation, Qubes OS running on a regular consumer laptop, or a proprietary platform – you will never be able to say with 100% certainty that your local machine is not compromised. That is because, as of today, there will always be some pieces of hardware and/or software that you don’t have full control over and that might do shady things or have 0-days. Especially when we’re talking about state-sponsored cyber crime, there is no chance even for advanced users to confidently say that their machines are not compromised.
Besides, security considerations shouldn’t only involve the personal computer, but also the overall network infrastructure, as well as other devices with access to it, such as smartphones. However, not even a multi-layered approach – or air-gapping – is going to be a silver bullet on its own.
My own approach to this is minimizing my reliance on closed-source software – and ultimately hardware – as much as possible, and building the software that I need from the authors’ original sources myself. Additionally, I make sure to enable protections, e.g. inside the Linux kernel, which I also build myself with solely the modules my hardware requires, as well as by making extensive use of the hardened flag on Gentoo Linux – my distribution of choice. I keep software to a minimum, with only around 1100 packages installed at all times on my Gentoo machine – most of which are libraries – and I use a similar approach on my phone. I run things like Tor Browser in dedicated KVM VMs (using Tails) and generally use VMs, and even a dedicated device, for software that I wouldn’t trust to run natively on my primary machine or that is too much of a PITA to compile myself on Gentoo.
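For context, opting into Gentoo’s hardened toolchain is done by switching profiles; roughly like this, with the caveat that exact profile names change between Portage releases, so the one below is only an example:

```
# List available profiles and switch to a hardened one
eselect profile list
eselect profile set default/linux/amd64/17.1/hardened

# Rebuild everything against the new profile (PIE, SSP, etc.)
emerge --emptytree @world
```

The rebuild is what actually applies the hardened compiler defaults to the installed packages, which is why it can take a while.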
Apart from these things, I use encryption at rest for all storage, and I have a decent backup procedure in place that makes sure my data is safely replicated in case it gets destroyed by malware. I do not use cloud services for backups and instead rely on my own physical infrastructure. In addition, I encrypt especially critical data using various mechanisms (e.g. GPG), so that even at runtime these files are secured and require explicit decryption and re-encryption to view or work with them.
Driving further down the road into bat country, I keep honey tokens and honeypots around that might be of interest to potential attackers and that let me know if anything odd happens. I have implemented intrusion detection measures that notify me if someone or something executes specific things. There are plenty of booby traps spread throughout the system that an attacker might accidentally run into.
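One way to build such a booby trap: drop a decoy executable with a tempting name somewhere an intruder poking around would find it, and have it do nothing except log the event and alert you. A hedged Python sketch – the log path is a placeholder, and a real setup would additionally send a notification rather than just writing a file:

```python
#!/usr/bin/env python3
"""Decoy executable: running it records who tripped the trap, and when."""
import datetime
import getpass
import os


def trip_trap(log_path: str) -> str:
    """Append a timestamped record of this invocation and return it."""
    record = "{} user={} pid={} cwd={}".format(
        datetime.datetime.now().isoformat(),
        getpass.getuser(),
        os.getpid(),
        os.getcwd(),
    )
    os.makedirs(os.path.dirname(os.path.abspath(log_path)), exist_ok=True)
    with open(log_path, "a") as f:
        f.write(record + "\n")
    return record


if __name__ == "__main__":
    # In a real deployment this would also notify you (mail, push, webhook, …).
    trip_trap(os.path.expanduser("~/.cache/.trap.log"))
```

Installed under a name like `backup-keys` in a directory an attacker would enumerate, any execution immediately tells you that someone is where they shouldn’t be.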
From a hardware perspective, my computer does not have an integrated camera or microphone. I use peripherals for that, which I keep physically disconnected when I don’t need them. On smartphones I make use of software kill switches, as well as stickers to put over the cameras, especially front-facing ones. The most important input device on my computer – the keyboard – is connected via USB, not Bluetooth. Bluetooth in general is off on most of my devices, unless I have a reason to explicitly turn it on. My computer is connected to the network using Ethernet cables. Only mobile devices use the 5 GHz WiFi. macOS and iOS devices on the network are isolated from everything else and only allowed to access the internet via specific ports.
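This kind of segmentation can be expressed, for example, as an nftables forward policy on the router. A sketch with placeholder interface names – your VLAN and WAN interfaces will be named differently:

```
# /etc/nftables.conf (sketch – interface names are placeholders)
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # isolated devices may reach the internet, but not other segments
        iifname "vlan_iot" oifname "wan0" accept
        # allow established return traffic back in
        ct state established,related accept
    }
}
```

With a default-drop forward policy, anything not explicitly allowed – including traffic between segments – simply never gets routed.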
I’m ultimately looking forward to owning devices that run open firmware, like some System76 machines do, and that have hardware kill switches for individual modules, like the Murena 2 smartphone or the Purism devices. Unfortunately, there is still a long way to go. I’m not suggesting that open hardware, firmware and software are invincible, but they at least greatly reduce the risk of malevolent actors intentionally including and hiding backdoors, and they allow people to thoroughly investigate their own devices in case of a suspected attack.
To return to the actual question at hand, one commenter on Hacker News probably put it best:
Great question. I don’t anymore. […] today it’s not about striving for zero risk (for 99.99 of people), but picking the ratio of overhead and risk you’re ok with.
Here is a list of further resources to look into if you’re interested in learning more about securing your (Linux/Android) devices and your local area network: