Solved: SSH, security, and root-privileged tasks

Greetings all,

it is generally recommended to disable root login, the justification being that bots probing for ssh(1) access run ssh root@$IP and then try to guess a password, which is easier than guessing both a username and a password. Additionally, an audit trail is generated. However, if the latter is not an issue, and one configures key-based login, does the problem and its justification not go away?

The reason for asking is that, following the recommendation, I have enabled key-based login for my user and use su(1) if root privilege is needed. However, for some tasks, e.g., net/rsync transferring directories/files that require root privileges, the above-mentioned protocol falls apart.

So how does one approach such issues?

Kindest regards,

M
 
Enable root login and limit it to key-based access. There is nothing wrong with allowing root access as long as you protect it from brute-force attacks (either through a firewall (limit access to port 22), or an IPS (like fail2ban), or key-based restrictions).
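For the firewall option, a pf rate-limit plus an overload table is one common way to blunt brute-force attempts. This is only a sketch; the table name and the 3-connections-per-60-seconds rate are arbitrary choices:

```shell
# /etc/pf.conf -- fragment (sketch; rate and table name are assumptions).
# Sources that open port 22 faster than 3 times per 60 seconds get
# dumped into the <ssh_abusers> table, whose members are then blocked.
table <ssh_abusers> persist
block in quick proto tcp from <ssh_abusers> to any port 22
pass in proto tcp to any port 22 keep state \
        (max-src-conn-rate 3/60, overload <ssh_abusers> flush global)
```

Reload with `pfctl -f /etc/pf.conf`; stale entries can be aged out from cron with `pfctl -t ssh_abusers -T expire 86400`.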

It's less secure if you don't set passphrases on your private keys, but you can get away with it if you add SSH command restrictions to your keys.
 
IF ssh has no bugs, and
IF the key length of your "key-based login" is sufficient that keys can't be guessed or brute-forced, and
IF your system doesn't have any bugs that might have caused root's private key to be leaked, and
...
then allowing key-based login for root would be safe enough.

But these conditions don't hold, in particular the first one. You probably cannot review the source code and theory of operation of ssh and of the underlying cryptography well enough to demonstrate to your satisfaction that there are no bugs. And we have seen bugs and vulnerabilities in SSL and SSH in the past, so we know they exist.

In the end, preventing unauthorized root access is a game of statistics. You will never get absolute security for your system. But you can make it safe enough that you are willing to trust it with your data, understanding that a tiny risk remains, but that the usefulness of the system outweighs that tiny risk.

The trick here is to make a cost-benefit analysis. In your example: direct root login with a password is simply too easy to crack with a brute-force attack, so the risk is too high. Personally, I think that key-based access directly to root is also too high a risk, because its benefit is very small. On the other hand, I allow key-based access to normal user accounts (even though it carries the same risk), because the benefit is huge: without it the system would be unusable for remote access, which I simply need.

I could implement various ways of making access even stricter (one-time passwords, hardware keys, fingerprint readers, or security tokens that display codes), and if I had enough time I would love to do so, but administering my FreeBSD system is a hobby, and I only have limited time for it. I definitely have adding hardware keys (YubiKeys or Google's version) on my to-do list, but haven't gotten to it yet.

Allowing remote access to root via a two-step process (first key-based access to a user account, then su or sudo or doas, which requires entering a password) seems secure enough for my purposes. The real reason behind it is not that hacking this setup is impenetrably difficult, but that attempting to do so would attract attention (I would first see audit trails for lots of ssh login attempts, followed by lots of su attempts), so I could shut an attack down.

The other question is this. You say that you need remote access to root (via ssh) for things like rsync. I would challenge that assertion. In many cases, you can arrange things such that a non-privileged account can get at all the data, for example by having an internal way of moving the data to different places and different protections.

The other thing you can do is this: Do not allow root access via ssh, but allow access to another account (let's call it "remoteadmin"), which is not root. Then put user remoteadmin into a sufficient number of groups so that it can read and write all required files using group permissions. This may require changing permissions in places (adding "g+rw" here and there), but it may end up being more secure. Why? Because a random script kiddie or an automated attack from Bulgaria or China or wherever will go after the account name "root", but they don't know to attack the account name "remoteadmin".
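On FreeBSD, a sketch of that setup might look like the following; the account name, group name, and /data path are assumptions, and all of this runs as root:

```shell
# Create the non-root admin account and give it group-based access.
pw useradd remoteadmin -m -s /bin/sh       # new unprivileged account
pw groupadd datagrp -M remoteadmin         # group that owns the data (assumed name)
chgrp -R datagrp /data                     # hand the data tree to the group
chmod -R g+rw /data                        # the "g+rw in places" step
find /data -type d -exec chmod g+s {} +    # new files inherit the group
```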
 
Hi linux->bsd,

thank you for your thoughts.

Hi ralphbsz,

thank you for your, as always, thoughtful reply. I agree that the conceptual issue is risk-benefit analysis; however, since I may overlook something in trying to carry out such an analysis, the opinions of other experienced users are beneficial. I also agree that it is possible to circumvent the problem by moving the data, which, in fact, I did, and the few tasks I currently have in mind can be automated by a script. However, this is an implementation issue and not the conceptual issue that I am trying to understand.

I am not sure I agree with your conclusion regarding the posited conditions. Regarding the first one, as shkhln already noted, it is applicable regardless of which user is trying to connect. Regarding the second one, the available research suggests that a 2048-bit key is sufficiently safe until 2030, and that analysis takes into account specialized hardware available to near-unlimited-budget entities. Considering that those are not in my threat model and keys of length 4096 are available, I am not very concerned. Regarding the third one, it seems that the same applies to the user's key, no?

Hi shkhln,

exactly.

Kindest regards,

M
 
No, it's not that easy. You are taking an absolutist viewpoint: something either has bugs, or it is perfect. With that attitude, you will find that everything has bugs, so nothing can be trusted. Taken to its logical conclusion, this means you can never store data on any computer, because no computer can be perfectly safe. So let's go back to clay tablets. While in a mathematical sense that is true, it is also not helpful.

In reality, this is a statistical process: ssh definitely has some bugs, but not very many. And the more you close it off, the fewer bugs are relevant. So disabling password-based login for root means that fewer of ssh's bugs are exposed, and disabling key-based login for root makes it better still. This is particularly true because we know that high-speed attacks against ssh are being executed all the time: put a machine on the public internet (with a static IP address), open the ssh port on 22, and look at your log files; it's a zoo out there.

By the way, that makes me think of another technique to reduce the attack surface: security by obscurity. Take the machine that has login capability and do not give it a DNS name (so it can only be found by scanning IP addresses); ideally give it only an IPv6 address (there are so many of them that scanning is much slower); don't put ssh on port 22 but on something non-obvious and random (don't use 2222 or 12345, but a nice random number, perhaps even one that changes every day); and so on. This alone stops nearly all of the dumb scanning attacks. I just looked at the log file of one machine that I have set up sort of like this (not IPv6, and the ssh port doesn't change), and in roughly the last year (since May 2018, which is as far as my log goes back), I've had a total of 5 failed login attempts; most of those were probably valid login attempts with a typo. With an attack surface that tiny, the chance of being hit by a random scanning attack is much reduced. No, it does not give perfect security, but it is easy and effective.
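Concretely, the port part of this is a one-line change on the server; the number below is just an example of a "nice random number":

```shell
# /etc/ssh/sshd_config -- fragment (the port number is an arbitrary example)
Port 48203
```

After `service sshd reload`, clients connect with `ssh -p 48203 host` instead of the default port 22.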
 
Hi ralphbsz,

thank you again for the reply. I am not sure that it addresses my point. I did not argue that ssh does not have bugs, but that the bugs apply to both the regular user and the root user. Thus, I do not understand how the bugs can distinguish between the users. Can you please elaborate?

Yes, my machine does not have a DNS name, and yes, I have moved the port from 22, not because I believe that this is a security measure, but just to decrease the log-file noise. I am not sure how to implement the randomness of the port, because it would have to be synchronized with the machine that wants to connect. Perhaps some algorithm running on both machines?
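One way to get synchronized "randomness" without any coordination traffic is to derive the day's port deterministically from the UTC date plus a shared secret, with the same script run on both machines. This is only a sketch: the secret and the 20000-59999 port range are assumptions, and cksum is not a cryptographic hash.

```shell
#!/bin/sh
# Sketch: both machines run this and agree on today's port without
# talking to each other. "s3cret" and the port range are assumptions;
# cksum stands in for a real keyed hash.
secret="s3cret"
day=$(date -u +%Y%m%d)                      # same on both ends (UTC)
hash=$(printf '%s' "${day}${secret}" | cksum | awk '{print $1}')
port=$(( 20000 + hash % 40000 ))            # lands in 20000..59999
echo "$port"
```

The server's cron job would rewrite the Port line in sshd_config from this value each midnight and reload sshd, while the client passes the same value to ssh -p.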

Kindest regards,

M
 
So how does one approach such issues?

I have an Ethernet LAN.

I'm the only one who should have access to my machines, and I have physical access. Remote access is disabled, and enabling wi-fi in a building with 50 apartments is not in my best interest.
 
You are taking an absolutist viewpoint: something either has bugs, or it is perfect. With that attitude, you will find that everything has bugs, so nothing can be trusted.

Not at all, I just don't like your examples.

If we are talking about potential buffer overflow exploits allowing a remote attacker to override key/password checking code, then, according to the diagram at https://security.stackexchange.com/a/115905, the root auth permission check is part of a separate monitor process, so, yes, disabling root access does help with that concern by limiting the consequences to regular accounts.

However, if you truly don't trust the auth logic itself, I don't see why you would give any other code in OpenSSH a free pass.
 
Hi Trihexagonal,

The premise of my quandary is the need for remote access, so I am at a loss as to how your answer "I . . . have physical access. Remote access is disabled . . ." addresses my question.

Kindest regards,

M
 
I am at a loss as to how your answer "I . . . have physical access. Remote access is disabled . . ." addresses my question.

No problem. I was at a loss as to who "one" referred to in the question I responded to.

So how does one approach such issues?

In my experience, "one" in that context normally refers to the question in general of how others approach such issues, which is why I responded with how I approach the issue.

If you were referring to yourself in the 3rd person as "one" and asking how you should address it, then I misunderstood your question, that not being how I refer to myself when asking a question.

I defer to the use of "I" or "me" in regard to myself and reserve "one" to describe numerical values.

"How should I approach such issues" or "How would you approach such issues if you were me" my personal preference of phrase in this instance.
 
I did not argue that ssh does not have bugs, but that the bugs are applicable to both the regular user and the root user. Thus, I am not understanding how the bugs can distinguish between the users.

OK, I need to learn to write more clearly. If nobody understands my point, that probably means that I didn't explain it correctly.

To begin with: Key based authentication should in theory be safe enough; with the key lengths we're using today (I think the default is 2048 bits for RSA), the possibility that it can be brute-force cracked is nonexistent.

But bugs exist. Those bugs typically don't distinguish between regular users and root. This is where the risk/reward/cost thinking comes in.

First, is ssh access to regular user accounts necessary? Yes, for many people it is. For example, I store many important files on my server at home, and sometimes it is super convenient to be able to check one of those files while I'm at work or on the road. The alternative is about 2-3 hours in a car to drive home and back (if I were to completely close off outside network access to my home). I understand that there is some risk: random hackers could exploit a bug in ssh to get into my user account and find out many secrets about me. That would be bad, and could lead to some embarrassment (they might, for example, find my medical records or tax returns) and to a lot of hassle (I would have to clean up after the hackers, make sure they didn't tamper with files, and improve security). To reduce that risk somewhat (but not completely!), I hide my server and its ssh port (see above: the IP address is not constant and not advertised in DNS, and the ssh port is not where you expect it). This costs me a little convenience (I have to type "ssh -p 12345 ralphbsz@123.45.67.89" instead of "ssh house.ralphbsz.org"), but the cost is small and the reduction in risk is big.
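Incidentally, the typing cost of that can be removed with a client-side alias; the values below are the made-up ones from the example:

```shell
# ~/.ssh/config -- fragment; alias for the long command above
Host house
    HostName 123.45.67.89
    Port 12345
    User ralphbsz
```

after which plain `ssh house` works (though the file then records the address and port, so keep it protected).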

Now, does the same logic apply to remote access to root? No. It is quite rare that I have to log in as root right away. And if I do want to become root, I can always log in to my normal user account and then use su/sudo/doas. If I'm going to be doing stuff as root, then I'll have to spend some time being careful anyway, and typing one of those commands and a password is not a big inconvenience. On the other hand, random hackers are less likely to try to crack a system by first attacking user accounts and then trampolining to root from there; and if they did try that, the second stage of password checking would slow them down so much and leave so much of an audit trail that I might catch them. So for remote root access, the reward is not big, the risk reduction is very high, and the extra cost is minimal. Which is why I have turned all remote access to root off (key and password).

Could it be that ssh has a bug and turning off access to root doesn't actually work? Possible, but very unlikely. Even if that bug did exist, I would still have to have a hacker trying to crack my system, and the other defenses (audit, auto blacklist, unpublished IP address, unusual ssh port, ...) would hopefully be strong enough.

Now, the original question was: remote access is actually very necessary (for convenience) if one has to use rsync. I understand that, and I used to have the same problem: For certain copying operations, it would be more convenient to come in directly as root. But in my opinion, opening up key-based access to root for just this use case is not a good tradeoff, since this situation can be handled by other techniques, which just rely on file access control on the machine. Depending on your situation, your tradeoff might be different: how valuable is the data on your machine, how secure is your network access, how paranoid do you want to be?
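One such technique, for completeness: if the server's unprivileged account may run rsync under sudo, the elevation happens only on the remote end while the login itself stays non-root. The account name, paths, and sudoers policy here are assumptions:

```shell
# Log in as the unprivileged account; only the remote rsync runs as root.
# Needs a sudoers entry on the server along these lines (an assumption):
#   myuser ALL=(root) NOPASSWD: /usr/local/bin/rsync
rsync -a --rsync-path="sudo rsync" myuser@server:/etc/ /backup/server-etc/
```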

Does this help explain the reasoning?

Adding a side remark: I just had an idea. It might be nice to allow root login with key authentication ONLY from the local network. This might be a nice compromise: say you have a local network 192.168.0.0/24, you think you have firewalled it pretty well from the dangerous public internet, and you think you can mostly trust people on the internal network. In that case, you might be able to have your cake and eat it too: allow ssh to root (just for the rsync use case) only from the internal network, not from the outside. According to some chatter on Stack Exchange, one can do that by putting the "PermitRootLogin" line in sshd_config into a Match block, or by prefixing the key in ~root/.ssh/authorized_keys with from="192.168.0.0/24". I haven't tried it yet, but just added it to my to-do list to test.
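For reference, the two variants mentioned would look roughly like this; the 192.168.0.0/24 network is the one from the example, and this sketch is untested:

```shell
# Variant 1: /etc/ssh/sshd_config -- root login off globally,
# key-only root login allowed from the internal network.
PermitRootLogin no
Match Address 192.168.0.0/24
    PermitRootLogin prohibit-password

# Variant 2: restrict the key itself in ~root/.ssh/authorized_keys:
# from="192.168.0.0/24" ssh-ed25519 AAAA...key... rsync-key
```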
 
Hi Trihexagonal,

"As a personal pronoun (both subject and object), one can be used to refer to ‘people in general’. We often use one in making generalizations, especially in more formal styles. However, if one is used too much, it can make the speaker sound too formal." Cambridge University Press 2019, (Underlined emphasis supplied)

Kindest regards,

M
 
Hi ralphbsz,

thank you again for your patience. Let me restate your argument(s), to make sure that the problem is not at the transmitter, as you suggest, but at the receiver.

Regarding the bug, are you arguing that a bug can have more disastrous consequences when root is used instead of a regular user? If one (LOL) accepts the premise, your argument makes sense to me.

Your fourth paragraph summarizes my situation. I am out of my office, often for rather extended periods of time, and I need to access files on my central server and back up files, both system- and user-created, from my local machine. The former is generally not a problem; I can also do the login/su protocol. I attempted to solve the latter by having portable storage for backups. The problems were, inter alia, failure of the portable storage, discrepancies between the content of the local machine and the central server due to different data structures, which took an enormous amount of time to resolve, and the like.

Perhaps the realistic solution is two-factor authentication.

Kindest regards,

M
 
Hi trev,

thank you for the reply. Do you mean local IP address? If so, how do you deal with change of local IP address due to mobility?

Kindest regards,

M
 
Hi trev,

perhaps I misunderstood. My server, to which I log in (the remote), has a static public address. My laptop, with which I travel (the local), has an IP address generally assigned by DHCP based on the (different) networks I connect to.

Kindest regards,

M
 
"As a personal pronoun (both subject and object), one can be used to refer to ‘people in general’. We often use one in making generalizations, especially in more formal styles. However, if one is used too much, it can make the speaker sound too formal." Cambridge University Press 2019, (Underlined emphasis supplied)

How would one reach the stars?

Does that sound like I'm asking for information detailing how I can make a warp 10 journey to the Trihexagonal Nebula? Or how space travel to the stars might be achieved?

I kind of like words and using them if you hadn't noticed. I broke the old able2know.com English Alliteration sub-forum record of 4 letters at 4 Word Alliteration using the same first 5 letters in each word of a 4 word sentence. Then did it again to raise the bar.
 
You got it!

Regarding the bug, are you arguing that a bug can have more disastrous consequences when root is used instead of user?
Exactly. If a hacker manages to get in, and it is "just" into a user account, that's a lot less bad.

I am out of my office, ... and I need to access files on my central server ...
Exact same situation here.

Perhaps the realistic solution is a two-factor authentication.
Definitely on my to-do list. I've seen lots of banks, internet companies, and government agencies rely on various hardware solutions. The RSA SecurID used to be very popular, but I hear it's a pain to set up, and supposedly the tokens are quite expensive. These days the YubiKeys and Google's Titan key seem like a good idea; they are reasonably cheap (under $50), and rumor has it that setting up ssh and https servers to authenticate with them isn't terribly hard. I just haven't found the time ... too many urgent projects and emergencies.

Another solution to the problem of getting to your documents: use cloud storage from a reputable vendor. Prices vary, but most high-quality cloud providers offer a "free tier" or 1-year free trials with capacity limitations. If you can get the total size down to about a TB, it is not very expensive. Just as a benchmark: Apple offers 2 TB for US$10 or EUR 10 per month. If you're paranoid, you can encrypt your files (a waste of time with a reputable vendor, but tin foil is cheap). I've thought about doing this for my document storage, but haven't had time to configure it. The real problem with this solution (for me) is figuring out which part of the mountain of garbage stored on my server is important enough to be accessible. People who are more organized might like this solution better.
 
Hi ralphbsz,

I used the RSA SecurID when working for a corporation, and it was rock solid all over the world. The YubiKeys moved from open-source software to closed source, and recently a significant bug was discovered in their implementation. I am thinking more along the lines of soft tokens instead of hard ones; there is Google Authenticator, there is S/Key, and others.
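For the soft-token route, the usual shape of the server side is key-plus-OTP via PAM; this is a sketch, and the specific OTP module (e.g. a google-authenticator PAM module) and its PAM configuration are assumptions not shown here:

```shell
# /etc/ssh/sshd_config -- fragment: require BOTH a public key and an
# interactive one-time code handled by PAM (PAM module setup not shown).
AuthenticationMethods publickey,keyboard-interactive
ChallengeResponseAuthentication yes
UsePAM yes
```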

I think that this inquiry is thus solved. Thank you to all who participated.

Kindest regards,

M
 