How strict are you about ssh host key checking?

cracauer@
I recently made the switch to distributing ssh host keys to new machines before the first ssh connection is made.

For a long time I have been strict about not knowingly changing ssh host keys when machines get replaced, so that I never have to ignore a change warning on my own hosts. Of course, some machines are under the control of people who don't care, and preaching has limited success. I think I recently got a warning from GitHub of all places.
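One way to wire that up (a sketch with made-up paths and hostnames, not a description of my actual tooling): pre-generate the host key on the provisioning side, install it on the new machine, and record the public half in the known_hosts data that gets pushed to clients.
Code:
# on the provisioning host: pre-generate the new machine's host key
mkdir -p ./newhost
ssh-keygen -t ed25519 -N '' -f ./newhost/ssh_host_ed25519_key

# install both halves on the new machine via the provisioning system, then
# record the public half in the global ssh_known_hosts pushed to clients
printf 'newhost.example.com %s\n' "$(cat ./newhost/ssh_host_ed25519_key.pub)" >> ssh_known_hosts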

How about you? Don't tell me you've been there since 1996 :)
 
I'm fairly strict with them. I want to know why a host key changed, because it often indicates a server has been replaced, or that something bad happened. The rule at $DAYJOB is that hostnames are never reused, so host keys should not, and do not, change. A warning typically means somebody messed up DNS again, because the 'old' hostname's IP got reused for a new server. DNS at $DAYJOB is a mess; never let a couple of "Minesweeper Consultants & Solitaire Experts" maintain DNS. Don't even get me started on them repeatedly, and continuously, forgetting that PTR records exist :rolleyes:

Haven't found a good, proper solution for load-balancing git+ssh, though. If you spread it across two or more servers, you're either constantly confronted with changing host keys, or you have to put the same host keys on all of them.
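If the servers do keep distinct keys, one workaround (a sketch only, with made-up hostnames, and not something I've battle-tested) is to record every backend's public key in known_hosts under the load-balanced name:
Code:
# on each backend, emit a known_hosts line under the load-balanced name
printf 'git.example.com %s\n' "$(cat /etc/ssh/ssh_host_ed25519_key.pub)"
Collect those lines into the clients' known_hosts; as far as I can tell, ssh is happy as long as one recorded key for that name matches the key the server presents. Still, putting the same host keys on all backends stays the simpler answer.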
 
I have always worked in more controlled environments. Generally, with automated host building, the host keys get generated, installed, and become "known" very early in the process; otherwise the deployment process (which generally relies on ssh) can hiccup. For security reasons, they never change. The "automation system" is generally designed to deploy new hosts and retire old ones, so porting host keys to a "replacement" host would be quite unusual. You might do it to avoid disruptions on a failed host with special or unusual functions. E.g. our name servers cohabited with time servers and were always physical because they were needed before the VM servers in a power-loss recovery. Changing their host keys would have been disruptive.
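For what it's worth, the sanity check such automation can do is simply comparing fingerprints of the installed key against the copy recorded at build time (the paths here are made up, not our actual layout):
Code:
# fingerprint of the key installed on the host ...
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# ... and of the copy the build system recorded; the SHA256:... values must match
ssh-keygen -lf /deploy/keys/host42/ssh_host_ed25519_key.pub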
 
Don't tell me you've been there since 1996
The SSHFP RR in DNS was introduced by RFC 4255, so 10 years after 1996.

I put my SSHFP records in DNSSEC-enabled DNS zones, in combination with 'VerifyHostKeyDNS yes' in ssh_config.
So that's very strict AND still convenient.

To generate the values to be placed in DNS, do
`ssh-keygen -r host.domain.tld`
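The output is ready-to-paste zone file lines, roughly of this shape (the digests below are placeholders, not real fingerprints):
Code:
; algorithm 1 = RSA, 4 = Ed25519; fingerprint type 2 = SHA-256
host.domain.tld IN SSHFP 1 2 <sha256-hex-of-rsa-key>
host.domain.tld IN SSHFP 4 2 <sha256-hex-of-ed25519-key>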
 
Elaborating on my previous post, an even better solution seems to be DANE for SSH, which I haven't used myself (yet).
Or use both, as they can coexist!
It again requires DNSSEC, plus a TLSA record, which looks like this:
Code:
_ssh._tcp.srv99.example.com  IN TLSA  3 1 1 9d98b9ae8fc...
3 1 1 means: DANE-EE (end entity), public key (SPKI), SHA-256
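To compute that "1 1" digest from an existing host key, something like this should do it (a sketch I haven't verified against a validating resolver; it assumes OpenSSH and OpenSSL, and uses the RSA host key since ssh-keygen's PKCS8 export is most reliable there):
Code:
# export the public key as PKCS8 PEM, re-encode as DER SPKI, hash with SHA-256
ssh-keygen -e -m PKCS8 -f /etc/ssh/ssh_host_rsa_key.pub \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256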

The following config I haven't used (yet), but it's easy to read what it does:
Code:
VerifyHostKeyDNS yes
Match Host *.example.com
        CheckHostIP no
        ProxyCommand dane ssh %h %p

The TLSA record currently seems to strictly require DNSSEC, whereas the SSHFP record does not.

Both the SSHFP and TLSA records currently work only on domain names, not on IPs (yet);
but two weeks ago I wrote RFC I-Ds to enable these records for IP addresses.
Very soon I will publish this as a combined draft, or will separate it into multiple drafts.
Work in progress.
 
What about for home use? I have a few machines, and many years ago I had a single host key that was distributed to all systems so I could easily identify them. Then I used the same SSH key per user. For the past few years I have had separate host keys and separate user keys per host as well. I distributed those ahead of time at build time, but lately I am wondering if it is worth it: if something goes wrong and the system is not provisioned properly, then SSH doesn't work.

I am thinking of going back to a single key per user, and perhaps just a single host key, so I can change them all at once easily and make distribution easy.
 
If it's a home system you can do whatever you like, but I personally don't like the idea of sharing keys: it's not that big a deal to approve a new key if I know there's a reason for it, and from then on there are no more interactions over it unless there's another change.
 
Understood. Yes, it is set up once and then forgotten.

Then, what are some alternatives? I have a laptop and a "host" which has a workstation jail and a router jail. If I have the same user on all systems, would you use the same SSH key for that user on all systems, or a different one for each system? What about SSH host keys?
 
I might be talking about different things. The host keys, as I understand them, are supposed to be on a per-system basis. The SSH keys (or certificates) for authenticating the user should be on a per-user basis as far as I've been able to tell. So the user keys would get copied, as I'm not sure how it would even make sense for those to vary machine to machine. It's probably possible, but would defeat the purpose.
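In concrete terms that is one key pair per user, with only the public half copied to each machine; a minimal sketch (the hostname is made up):
Code:
# generate one key pair for the user, kept on the workstation only
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# install the *public* half on every machine you log in to
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host.example.com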
 
Admittedly, I'm not as vigilant as I should be about rotating keys, but that's mitigated by all of my SSH stuff being on my internal private network, and I rebuild/upgrade machines frequently, so I often end up generating new host keys anyway. I'm curious how many folks add the optional passphrase when they generate a user key, or do they assume the generated private key on its own is adequate.
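For anyone weighing it: with an agent in the picture the passphrase costs very little, something along these lines (default file names, nothing site-specific):
Code:
# ssh-keygen prompts for a passphrase when generating the key
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# unlock it once per session; subsequent connections reuse the agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519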
 
I recently made the switch to distributing ssh host keys to new machines before the first ssh connection is made.

For a long time I have been strict about not knowingly changing ssh host keys when machines get replaced, so that I never have to ignore a change warning on my own hosts. Of course, some machines are under the control of people who don't care, and preaching has limited success. I think I recently got a warning from GitHub of all places.

How about you? Don't tell me you've been there since 1996 :)
At home on my own network, very strict. The host keys almost never change except when a -CURRENT patch does something to require key generation.

On extremely rare occasions I clone a new machine.

At $JOB I'm less strict. We manage about 1400 servers. Clients request server rebuilds, and at other times servers need a rebuild for various reasons. Last year we regenerated host keys using a different key algorithm. Given that not everyone on the team communicates effectively, one has to expect some "stuff" to happen.
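For reference, regenerating a host key with a newer algorithm is little more than this (a sketch with stock OpenSSH paths; ssh-keygen will prompt before overwriting an existing file):
Code:
# generate a replacement ed25519 host key
ssh-keygen -t ed25519 -N '' -f /etc/ssh/ssh_host_ed25519_key
# reload sshd, then update whatever known_hosts/SSHFP data tracks this host
service sshd restart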
 
I used passphrases for a long while and used keychain to keep the ssh key loaded while my session was active and to remove it when I locked my screen. Regarding host keys, I used the same host key for all of my systems, then generated unique ones, and am now reverting, mainly because of some headaches with sharing keys during my build process.
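A typical keychain setup looks something like this (a sketch, not necessarily what I ran back then; the key file name is the stock default):
Code:
# in the shell startup file: start or reuse an agent and load the key once per login
eval "$(keychain --eval --quiet ~/.ssh/id_ed25519)"
# hooked to the screen locker: drop the loaded identities again
ssh-add -D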

Eh, at the end of the day, it boils down to having a frictionless process. I build my systems using a script; what does your process look like, and how frequently do you do rebuilds? I tend to do rebuilds every 3-6 months because it is relatively painless and I get a clean slate.
 
I'll admit I'm pretty sloppy about it at home. Fool's errand at work where everything is a short-lived container. I don't think we allow SSH at all anymore anyway.
 