Heads up: this turned out to be very long, longer than I anticipated before I started writing. Also, I'm not completely certain whether this shouldn't have gone into user space programming. Since it's still very much tied into base, I'll leave it here for the moment and ask the moderators to move it if they think it's misplaced.
During recent systems maintenance, I stumbled over the usual self-inflicted hurdles like kern.securelevel=3, particularly on cloud hosts that do not offer any KVM console. Yes, believe it or not, those ssh-only boxes still exist.
Obviously, there are multiple ways of fixing this with a simple shell script. However, one day I sat down, did some tinkering, and built myself a C program to simplify things. One thing led to another and now I have a suite of tools, built on top of the base OpenSSL library's private/public key functionality, that offers some IDS functionality.
I've been working on this for a few days and am at a point where I'm wondering whether I've constructed something that might actually not be safe at all, which is why I'd like to get some feedback. Obviously, security by obfuscation or obscurity isn't really security, so I'd rather put some spotlight on it.
Basically, my tool set hinges on a private and public key, protected by
- a user-provided secret,
- file permissions,
- file flags (like schg, sunlnk).

These keys are then used to create SHA-512 hashes of a desired file (e.g. an executable) and sign them. This process
- requires the user to enter the key's secret,
- requires root privileges, because it changes the file permissions on the cryptographically signed hash to read-only and sets the schg and sunlnk flags.
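To give you an idea without dumping my C source here, the signing flow is roughly equivalent to this sketch with the base openssl CLI (the file names, target, and pass phrase are made-up placeholders; my actual tool goes through libcrypto, but the steps are the same):

```shell
# One-time setup: key pair, with the private key encrypted under a
# user-provided secret ("example-secret" is a placeholder).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -aes-256-cbc -pass pass:example-secret -out priv.pem
openssl pkey -in priv.pem -passin pass:example-secret -pubout -out pub.pem

# Signing: SHA-512 the target file and sign the digest with the private
# key; this is the step that prompts for the key's secret.
TARGET=/bin/sh
openssl dgst -sha512 -sign priv.pem -passin pass:example-secret \
    -out target.sig "$TARGET"

# Verification later needs only the public key; prints "Verified OK".
openssl dgst -sha512 -verify pub.pem -signature target.sig "$TARGET"

# Finally (root only) the signature file gets locked down:
#   chmod 444 target.sig && chflags schg,sunlnk target.sig
```

The chflags step is commented out above since it needs root and a FreeBSD file system; everything else runs as a regular user.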
Oh, and I almost forgot: my utility even goes a step further and assesses whether the file is an ELF binary; if it is, it also attempts to check the hashes of all libraries it depends on. If the file has a shebang, it attempts to check the interpreter's hash.
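The classification logic, sketched in shell (my tool parses the ELF header in C, and the actual hash checks are left out here, so this only shows which extra files would get checked):

```shell
# Decide how to treat a file: ELF binary -> also check its shared
# libraries, shebang script -> also check its interpreter.
classify() {
    f=$1
    # ELF magic is the four bytes 7f 45 4c 46 ("\x7fELF")
    if [ "$(head -c 4 "$f" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]; then
        echo "$f: ELF, dependencies to check:"
        ldd "$f" | awk '$2 == "=>" && $3 ~ /^\// { print $3 }'
    elif [ "$(head -c 2 "$f")" = '#!' ]; then
        # interpreter path: first line, minus "#!" and any arguments
        interp=$(sed -n '1s/^#![[:space:]]*//p' "$f" | cut -d' ' -f1)
        echo "$f: script, interpreter to check: $interp"
    else
        echo "$f: plain file"
    fi
}

classify /bin/sh
```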
Said keys can also be used to write an encrypted command file that is checked during boot time to set an approved securelevel. So, if you are root and have the private key's secret, you can write an instruction for booting into a less secure securelevel during the next reboot. I even built in a time stamp that lets the instruction expire past a set time, so if I forget to reboot after changing the securelevel, I'm not "surprised" during a later reboot.
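The expiry check itself is simple. Here's a standalone sketch (the key names, pass phrase, and file format are invented for the example; this version signs the instruction instead of encrypting it to keep the sketch short, and the actual sysctl call is left out):

```shell
# Demo key pair just for this sketch (the real tool reuses its signing keys).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -aes-256-cbc -pass pass:example-secret -out cmd_priv.pem
openssl pkey -in cmd_priv.pem -passin pass:example-secret \
    -pubout -out cmd_pub.pem

# Root writes the instruction: requested securelevel plus expiry timestamp.
expires=$(( $(date +%s) + 3600 ))          # valid for one hour
printf 'securelevel=1 expires=%s\n' "$expires" > boot.cmd
openssl dgst -sha512 -sign cmd_priv.pem -passin pass:example-secret \
    -out boot.cmd.sig boot.cmd

# At boot: honor the instruction only if the signature checks out AND
# it has not expired yet.
if openssl dgst -sha512 -verify cmd_pub.pem \
        -signature boot.cmd.sig boot.cmd >/dev/null 2>&1
then
    exp=$(sed -n 's/.*expires=//p' boot.cmd)
    if [ "$(date +%s)" -lt "$exp" ]; then
        echo "applying requested securelevel"
        # sysctl kern.securelevel=1 would happen here
    else
        echo "instruction expired, keeping default securelevel"
    fi
else
    echo "bad signature, ignoring instruction"
fi
```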
However, I'm uncertain whether the overall approach is at all beneficial to a system's security posture or whether this is just "theatrics". I'm thinking this looks like "poor man's code signing".
I was considering wrapping things into a kernel module so I could automatically check signatures before running any executables. Now, after some short digging, I found that OpenBSD has https://man.openbsd.org/signify, which appears to be close in functionality. However, my tools are all wrapped into Capsicum, so I imagine that should give some additional layer of security against file-content-based attacks?
Also, I'm wondering whether I might have missed a base utility that could have done all of this already. After all, the "keep it simple" philosophy has proven pretty good, and I don't want to make things complicated for complication's sake.
I probably already put waaaaay too much effort into this thing. Originally I was considering posting this whole thing on GitHub, but now, after writing this and looking at OpenBSD, I wonder whether it wouldn't be smarter to build on top of security/signify instead of my own crazy invention. Then again, mine seems to have a few more tricks up its sleeve, like checking dependencies?
I would love to get some feedback from you guys. Is this useful? Is this completely crazy, because it would be way better in a shell script? Is this whole concept not secure at all? If you see any holes in my approach, if you have questions or want to know more details - I'd appreciate any form of input!