Security from obscurity:
the good, the bad, and the uglifiers
I want to talk about a conversation a colleague and I had the other day. ‘The other day’, in this instance, meaning some time between two weeks and two years ago. He said something to which I responded, ‘sounds like security from obscurity to me’. That kicked off the conversation, over the course of which he brought up several examples he considered valid use cases of obscurity improving a security posture, while I dug my heels in and stated with full confidence that security is never increased by obscurity. This is a hill I will happily die on, and it occurs to me that it’s probably worth expounding a bit on how I came to be standing on it.
How it started
First, let’s talk a bit about the origin of the “security through obscurity” argument. It goes back to 1851, when Alfred Charles Hobbs, a locksmith, was giving lessons on how to pick the state-of-the-art locks of the day. Some people expressed concern that by teaching this he was helping criminals break through the best defences available at the time. To these concerns he responded with the ever-quotable “Rogues are very keen in their profession, and know already much more than we can teach them.” The essential idea is that you should not try to keep the mechanism that provides your security a secret (or at the very least, you should not rely on it staying secret), because criminals are likely to figure out how it works, and if knowing how it works breaks it, it was never secure in the first place. The same argument applies today, in my opinion, to things like infrastructure diagrams and choices of encryption algorithm. These, I think, should not be treated as secret material, because if their exposure compromises your security then you were never really secure in the first place.
How it’s going
These days the phrase comes up in a lot of security-focused conversations and is often wielded as a criticism of a proposed idea. The question is whether it is true in every case that obscurity should not be deployed for security. My colleague brought up a common example: moving a port number, something trivial like an SSH server listening on port X instead of 22. Some people have tried it and report that such a measure cuts probing on the SSH port from thousands of attempts per day to a mere handful. To this I respond with two points. Firstly, the handful of attempts that find the new port are probably the only ones that were even mildly threatening in the first place, so the satisfaction of reducing the “risk” is probably disproportionate to the actual reduction in risk (which is itself bad for a security posture). Secondly, I argue that if finding your SSH port brings an attacker even 1% of the way to breaching your SSH server, then you have done something very wrong in the way you have set that server up. The argument in favour of moving the port is the resulting reduction in the noise-to-signal ratio of your monitoring, which, I mean, ok, fine… but if you have a quality SIEM (Security Information and Event Management) setup, it shouldn’t make any difference beyond maybe saving you a few cents on disk space.
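For concreteness, moving the port is typically a one-line change in the daemon’s configuration. The sketch below is illustrative (the port number and file path are hypothetical, not a recommendation); the point is which lines actually carry the security load:

```shell
# /etc/ssh/sshd_config (fragment) -- hypothetical sketch

Port 2222                    # listen somewhere other than 22; this hides
                             # nothing from a full port scan, it only thins
                             # out untargeted mass-scanner noise

# The controls that actually matter work identically on any port:
PasswordAuthentication no    # key-based auth only; nothing to brute-force
PermitRootLogin no           # no direct root logins
```

A single sweep such as `nmap -p- -sV host` will find the relocated daemon and fingerprint it as SSH, which is why the measure only reduces noise rather than risk.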
Generalising the example
The SSH port brings us to a broader argument regarding obscurity as a security measure: the question of camouflage. Tanks are built to withstand attacks, but that is no reason to paint them with a bright red bullseye. In my mind there is room for a conversation about whether camouflage is a subset of security or a companion to it. Regardless, I think it’s worth noting that this mentality can often end up compromising your defences rather than enhancing them. My favourite example is key boxes, the devices you leave outside your home with a spare key for guests to access. They commonly come in two basic models. One is typically screwed into the wall next to your door and opened with a PIN. The other is shaped like a rock and designed to be left somewhere that is easy to locate if you know it is there, but hard to find if you don’t. Now, I should point out that neither of these is a very good security measure, but I would argue that of the two, the one not trying to hide is better. An attacker can simply crowbar it off the wall and take it away to cut the key out (or brute-force the PIN, if they’re the patient kind), but at least with this model you know where it is and will notice when the key is compromised. The hidden version, however (even if it also had a PIN lock), is liable to be seen being used once, nicked while you’re not looking, and not thought of for weeks while an attacker has access to your house. Key rotation is an important aspect of security, and visibility over compromise matters more than obscuring your mechanisms.
Getting into the ugly parts
This visibility over your security extends beyond observing compromise and into the final example that came up in the original conversation: uglifiers. For those who don’t know the term, uglifiers are a relatively modern web trend wherein you obfuscate your front-end code to make it harder to read. This is similar to minification, but different in that the desired outcome isn’t saving bytes over the wire but enhancing your security, the idea being that if people can read your front-end code, they can possibly attack it. This is where we get to the part of the conversation where I start to really feel like my hill is worth dying on. My problem with uglifiers is this: you no longer know what your website does. Supply-chain management is a major security concern, and trusting a third-party library to rewrite your code is a risk. Moreover, making the code hard to read leads to one of two outcomes. Either you have a codebase you can no longer read, and you have to trust that nothing malicious has been added to it; or the code can be reverse engineered back to its original state, in which case the uglifier was pointless in the first place. This, to me, is where the arguments against security from obscurity become important: in an attempt to add a little spice on top and obscure the code, you are actually taking away from your security, and it is (or at least was for a while; it seems to be less common again now) a commonly accepted part of “securing” your web page.
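To make the reversibility point concrete, here is a toy sketch (entirely hypothetical code, not taken from any real site or tool) of a check an obfuscator might emit, next to the same logic recovered simply by renaming and reformatting, which any browser’s dev tools will do for free:

```typescript
// A toy "uglified" client-side check, as an obfuscator might emit it.
// (Hypothetical example; any secret embedded like this is already public.)
const k = (a: string) => a.split("").reverse().join("") === "terces";

// The same logic, recovered by pretty-printing and renaming. Nothing about
// the obfuscated form stopped a reader from getting here.
function checksPassword(input: string): boolean {
  const reversed = input.split("").reverse().join("");
  return reversed === "terces"; // the "hidden" password is just "secret"
}

console.log(k("secret"));              // true
console.log(checksPassword("secret")); // true
```

If the logic guards anything that matters, it has to move server-side: no amount of renaming changes what ships to the client.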
So what am I trying to get at here? Well, it seems to me that complexity is antithetical to security, and that obscurity goes hand in hand with complexity. If you could convince me that you had in no way weakened any of your security mechanisms and had merely added obscurity on top, I’d perhaps consider it acceptable practice. I don’t believe this can be comprehensively demonstrated, though, especially not as part of a holistic solution, whilst the increase in complexity is apparent and self-evidently problematic. By accepting obscurity as a security measure I firmly believe we are letting bad practice into our daily lives and, for that reason, I will continue to call out any controls I feel are “security” from obscurity.
Let me take a moment to return to something I touched on above. One of the few things I conceded during the conversation was that the noise-to-signal ratio improvements can be good. It’s a real problem in the cyber security industry, where we burn out young talent by getting grads to sift through mountains of logs looking for potential breaches. Luckily, work is being done these days to make that process less painful, and one of the coolest tools for it is Elastic SIEM. It scales to fit almost every use case, ships with out-of-the-box detections to help keep you safe, and is built on open source technologies (something I’m likely to write another post on in the near future). If you’re interested in how the Elastic Stack might benefit you and your workplace, feel free to reach out to Skilledfield for more information and a free assessment; we’re an ANZ regional partner for Elastic and would be happy to get you started on the journey to a better SIEM implementation.
Written by: Keone Martin