To visit a website securely over HTTPS, it's crucial that the HTTPS client validates the server certificate. This ensures the integrity of the connection (that you're not the victim of a man-in-the-middle attack, for example).
The same goes for SSH connections, with one difference: SSH does not typically use a certificate authority with certificate chains and trust anchor stores. Instead, in most scenarios the server's public key as a whole is compared for equality against a "known" key stored on your system for that hostname during an earlier connection.
Do not confuse SSH host key verification (identifying the target machine) with client key authentication ("authorized keys"), nor with host-based authentication; these three kinds of keys each have their own purpose in the protocol.
It's very common to talk about "SSH keys" meaning the keys that authenticate you as a user, but to ensure integrity and security, the importance of validating SSH host keys should be understood as well.
In this post I explain how to handle a typical scenario in which you manage a moderate number of hosts: maintain the known hosts manually in a file you put in version control, share it with your team, and use it with your favourite automation tool like Ansible.
The typical SSH client configuration is to store the public key of the host key on the first connection.
By default, it stores the keys seen in ~/.ssh/known_hosts, via OpenSSH's default StrictHostKeyChecking=ask behaviour. Have a look and open that file; it may include a lot of lines and you probably can't tell which line belongs to which host.
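To make sense of that file, ssh-keygen can query it for you. Here's a small sketch; the hostname and the scratch directory are made up for the demo, and on your own machine you would point -f at ~/.ssh/known_hosts (or simply omit it):

```shell
# Build a scratch known_hosts with one entry (a stand-in host key).
demo=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$demo/hostkey"
printf 'srv1.mydomain.tld %s\n' "$(cut -d' ' -f1-2 "$demo/hostkey.pub")" \
    > "$demo/known_hosts"

# Which line belongs to which host? -F prints matching entries
# together with their line number.
ssh-keygen -F srv1.mydomain.tld -f "$demo/known_hosts"

# List the fingerprint of every entry in the file.
ssh-keygen -lf "$demo/known_hosts"
```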
Whenever you connect to a host for the first time, it asks you to accept the key by its fingerprint. Typically, that looks like this:
```
$ ssh email@example.com
The authenticity of host 'host.tld (188.8.131.52)' can't be established.
ED25519 key fingerprint is SHA256:eUXGGm1YGsMAS7vkcx6JOJdOGHPem5gQp4taiCfCLB8.
Are you sure you want to continue connecting (yes/no)?
```
A major problem with this "Trust On First Use" (TOFU) model is that you will just type yes and 'be done with it' instead of going through the pain of validating the key. 🙈 I mean, who wouldn't be annoyed by this message and reluctant to check carefully, right? This problem grows with the size of your team, as every member has to validate the key fingerprint for themselves at that prompt!
Even worse, when you reinstall a server or change its IP address, you (and every member of your team) will be presented with this scary message:
```
$ ssh firstname.lastname@example.org
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:PqpHHVHz+guHaNZMoQIhTKJsE4ByH1xkkM9qfTJOheI.
Please contact your system administrator.
Add correct host key in /home/myusername/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /home/myusername/.ssh/known_hosts:3
  remove with:
  ssh-keygen -f "/home/myusername/.ssh/known_hosts" -R "10.1.2.3"
ED25519 host key for 10.1.2.3 has changed and you have requested strict checking.
Host key verification failed.
```
The thing you will probably do is just follow that suggested command to remove the stored key, no? 🙈 Well, that may be exactly how an actual attacker gets in... 🤦
I strongly believe that this important warning should never become something "annoying" that gets ignored in practice, undermining the security properties of the SSH protocol. With more careful management of the "known hosts" file, you can ensure that this message only appears as a true positive: an actual attack or a network misconfiguration.
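If a key change turns out to be legitimate (say, you reinstalled the server and validated the new key out-of-band), ssh-keygen -R removes the stale entry from a specific file. A sketch on a scratch file; the IP and paths are made up for the demo, and in practice you'd pass your real known_hosts path to -f:

```shell
# Build a scratch known_hosts with a stale entry for 10.1.2.3.
stale=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$stale/hostkey"
printf '10.1.2.3 %s\n' "$(cut -d' ' -f1-2 "$stale/hostkey.pub")" \
    > "$stale/known_hosts"

# Remove all entries for that host; ssh-keygen keeps a backup
# of the original file in known_hosts.old.
ssh-keygen -R 10.1.2.3 -f "$stale/known_hosts"
```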
By default, OpenSSH checks several "known hosts" files on your filesystem to see if a key matches the one the target machine is using. The UserKnownHostsFile and GlobalKnownHostsFile SSH client settings specify the paths of these "known hosts" files.
The "user" file is both written to and read from; the "global" one is considered read-only by the SSH client.
Each setting can also take multiple files, but unlike the Include setting, it does not let you specify a ".d"-style directory with separate files, unfortunately. 😞
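To see which files your client will actually consult for a given host, ssh -G prints the fully resolved client configuration without making any connection (the hostname below is just an example):

```shell
# Print the effective client configuration for a host and filter
# out the known-hosts related settings. No connection is made.
ssh -G srv1.mydomain.tld | grep -i knownhostsfile
```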
The contents of a "known hosts" file are straightforward: each line consists of the machine's host name or IP address, the key type, the key itself and an optional comment, with the fields separated by spaces. A full explanation of the format can be found in the sshd(8) manpage. Why this is explained in the sshd (server) manpage rather than the ssh(1) client manpage is a mystery to me. It's SSH clients that use this logic, after all, and you generally do not have an SSH server installed on client machines. 🤷
An excerpt from the manpage:
```
SSH_KNOWN_HOSTS FILE FORMAT
     The /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts files contain host
     public keys for all known hosts. [...]

     Each line in these files contains the following fields: markers
     (optional), hostnames, keytype, base64-encoded key, comment.  The
     fields are separated by spaces. [...]

     Hostnames is a comma-separated list of patterns ('*' and '?' act as
     wildcards); each pattern in turn is matched against the host name. [...]
     When ssh(1) is authenticating a server, this will be the host name
     given by the user, the value of the ssh(1) HostkeyAlias if it was
     specified, or the canonical server hostname if the ssh(1)
     CanonicalizeHostname option was used. [...]

     Alternately, hostnames may be stored in a hashed form which hides host
     names and addresses should the file's contents be disclosed.  Hashed
     hostnames start with a '|' character.  Only one hashed hostname may
     appear on a single line and none of the above negation or wildcard
     operators may be applied.

     The keytype and base64-encoded key are taken directly from the host
     key; they can be obtained, for example, from
     /etc/ssh/ssh_host_rsa_key.pub.  The optional comment field continues
     to the end of the line, and is not used.

     Lines starting with '#' and empty lines are ignored as comments.

     When performing host authentication, authentication is accepted if any
     matching line has the proper key; [...]
```
To address privacy concerns over a compromised "known hosts" file, the SSH client can store a hash of each host name instead of the actual host name. I generally don't like hashed host names, as I consider it a disadvantage to be unable to inspect what's in this "trust store". 🙄 Also, it does not really make sense to obfuscate which hosts you manage over SSH when you also have your inventory of systems in plain text (your Ansible inventory, for example). 😅
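For completeness, this is how hashing looks in practice, sketched on a scratch file (the hostname is made up): ssh-keygen -H rewrites the file in place, lookups by name still work, but reading the file no longer tells you anything.

```shell
# Build a scratch known_hosts with one plain-text entry.
hashed=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$hashed/hostkey"
printf 'srv1.mydomain.tld %s\n' "$(cut -d' ' -f1-2 "$hashed/hostkey.pub")" \
    > "$hashed/known_hosts"

# Hash all hostnames in place (a backup lands in known_hosts.old).
ssh-keygen -H -f "$hashed/known_hosts"

# Entries now start with '|1|' and reveal no host names...
cat "$hashed/known_hosts"

# ...yet a lookup by name still finds the entry.
ssh-keygen -F srv1.mydomain.tld -f "$hashed/known_hosts"
```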
The use of plain text hostnames gives you the following benefits:
- It's a little easier to manually craft the file, and, a hidden gem...
- it automagically gives you tab-completion in your shell for all hosts. 😍
Shell completion scripts, included in most distributions, read the host names from this file.
Given the large size of (secure) RSA public keys, I would definitely recommend using only elliptic-curve keys, of which the shortest is an Ed25519 key.
Any modern OpenSSH installation should be equipped with Ed25519 support by now.
I'm a bit surprised to see that the known_hosts file only takes full public keys and does not allow you to specify the much shorter (SHA256) hash of a key, as shown in the prompt. Let me know if you know a reason why this is, or if there are any plans out there to include that.
An easy way to get started is using the ssh-keyscan utility. For example, here's how to create your own "known hosts" file containing the line for the host srv1.mydomain.tld on the default SSH port 22:
```
user@myhost:~$ ssh-keyscan -t ed25519 srv1.mydomain.tld \
    | tee -a ~/teamrepos/ansible/files/ssh_known_hosts
```
No validation with ssh-keyscan! This key is obtained without any validation. You may want to validate it by comparing it with the contents of /etc/ssh/ssh_host_ed25519_key.pub on the server, obtained over another secure channel (the machine's console, for example).
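One way to validate is to compare SHA256 fingerprints, which ssh-keygen can compute from any public key. The first commented commands use this article's example host and the standard host key path; the rest is an offline illustration of the comparison using a generated stand-in key:

```shell
# On the server, over a channel you already trust (e.g. the console):
#   ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# On your workstation, fingerprint the scanned key the same way:
#   ssh-keyscan -t ed25519 srv1.mydomain.tld 2>/dev/null | ssh-keygen -lf -
# If both print the same SHA256:... value, the scanned key is genuine.

# Offline illustration with a stand-in key:
v=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$v/hostkey"
server_fp=$(ssh-keygen -lf "$v/hostkey.pub" | awk '{print $2}')
scanned_fp=$(ssh-keygen -lf - < "$v/hostkey.pub" | awk '{print $2}')
[ "$server_fp" = "$scanned_fp" ] && echo "fingerprints match: $server_fp"
```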
Do this once for every host; it's an investment for the benefit of security.
Repeat this for all your hosts and it should look like this:
```
srv1.mydomain.tld ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsS76WSPEm8JbUTt6hSFs3iVQlNZp4oJYLmCPylr2ry
srv2.mydomain.tld ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpnbOdxKmTdgmLnW/0lHSOVfQSmS6Ob4+jjKKSzoFe6
```
You can add aliases or IP addresses separated by commas so that it will work for those as well, e.g.:
```
srv1.mydomain.tld,10.1.2.3,2a05:f480:1400:246:5400:2ff:fe35:52ff ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsS76WSPEm8JbUTt6hSFs3iVQlNZp4oJYLmCPylr2ry
```
I would recommend only listing the fully qualified domain name here, as shortened aliases may match more than one host, which impacts security and may complicate troubleshooting. Note that you will still have tab-completion for convenience, and it's also still possible to create Host aliases in the SSH configuration (see below).
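If a host runs SSH on a non-default port, the entry uses the [host]:port notation instead (port 2222 here is just an example); ssh-keyscan -p 2222 produces lines in this form directly:

```
[srv1.mydomain.tld]:2222 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsS76WSPEm8JbUTt6hSFs3iVQlNZp4oJYLmCPylr2ry
```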
Now let's configure the OpenSSH client for strict checking host keys against this "known hosts" file.
In ~/.ssh/config, put the following; the inline comments explain each setting:
```
Host *
    # This points to the manually managed "known hosts" file.
    GlobalKnownHostsFile ~/teamrepos/ansible/files/ssh_known_hosts
    # Instead of 'ask', always be strict!
    StrictHostKeyChecking yes
    # This disables the default behaviour of writing to a local
    # ~/.ssh/known_hosts file.
    UserKnownHostsFile /dev/null
    # I like to disable this setting or else OpenSSH attempts to
    # write to the UserKnownHostsFile with an entry based on the IP.
    # It would also show a tedious warning on every connection:
    #   Warning: Permanently added the ED25519 host key for IP address [...]
    # However, that would be written to /dev/null... 😅
    CheckHostIP no
```
In case you already have a local SSH configuration in ~/.ssh/config, I'd suggest adding this stanza at the very bottom of that file.
Now try the tab-completion! It should work with just the first few characters of a host name after typing ssh.
System administrators of workstations may consider adding such a configuration system-wide as well, in /etc/ssh/ssh_config.
In case you use Host aliases as short-hands, or for connecting over an alternative IP address like below, but you don't want to add the alias to the "known hosts" file for everyone, the HostKeyAlias option can point to the host key entry this host should match, e.g.:
```
Host srv1-out-of-band
    # Special out-of-band IP in case the internet connection is down!
    Hostname 10.99.99.99
    HostKeyAlias srv1.mydomain.tld
```
In case you need to fetch/clone/pull/push to GitHub over SSH, or any other public server for that matter that you don't really manage yourself, you may want to add these to a separate "known hosts" file like this:
```
user@myhost:~$ ssh-keyscan github.com \
    | tee -a ~/.ssh/known_hosts_personal
```
And then configure this as additional "known hosts" file like this:
```
Host *
    # Both the personal and the manually managed "known hosts" files.
    GlobalKnownHostsFile ~/.ssh/known_hosts_personal ~/teamrepos/ansible/files/ssh_known_hosts
```
You can configure a pattern of hosts with relaxed checks in the OpenSSH client, or even none at all. This avoids having to accept a key every time when hosts are automatically reinstalled, or when reprovisioning scripts regenerate the host key on purpose.
Add something like this to your SSH configuration:
```
Host *.localdomain
    # No host key checks for anything on .localdomain!
    StrictHostKeyChecking no
```
With a typical case of an Ansible repository shared with your team, you could do the following:
- Add the managed "known hosts" file to the repository.
- Configure the Ansible SSH connection settings accordingly, for example in ansible.cfg:
```
[ssh_connection]
# Default ssh_args if not specified are:
#   "-C -o ControlMaster=auto -o ControlPersist=60s"
# Add the SSH configuration inline to use strict host key
# checking against the in-repository ssh_known_hosts file.
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=./files/ssh_known_hosts -o StrictHostKeyChecking=yes
```
Now profit from out-of-the-box secure SSH connections!
Benefits of this setup in a small team include:

- Changes to SSH host keys can happen through regular change processes like version control / code review / merge requests.
- Everyone will be "in sync" with the actual host keys. Whenever a new host has been deployed or an existing one has been reinstalled by a team member, you just pull in the (git) changes.
- No need for every team member to perform an interactive "dance" with SSH on the command line to validate and store the host key locally. ✅
Traditionally, several solutions are widely described and available for scaling this up in a larger organization. They do come with some pain and effort, though.
- OpenSSH supports a limited form of TLS-like 'certificates' (in its own format, not X.509).
By creating an SSH certificate authority you could sign all individual host keys and install the trusted CA public key on the clients to validate the host keys. Problems that remain: revocation, the additional work of making private keys accessible when (re)installing hosts, and limited support for this in SSH clients. If you're interested in a solution in this direction, look at this blog by Facebook Engineering with a more specific use case, or this fork of OpenSSH by Roumen Petrov that properly supports X.509 v3 with CRL and even OCSP stapling.
- Leverage DNSSEC-enabled DNS and publish SSH host key fingerprints with SSHFP records.
This is truly a very elegant solution, but it requires a proper DNSSEC deployment and support on all clients.
It also only covers accessing hosts by their FQDN, whereas a plain known_hosts file allows you to add aliases in any form, with tab-completion as a bonus.
- Centralized deployment with a database like LDAP. This could be cool, but also fragile. When it fails, you won't be able to access any machine, and it could be a single point of failure in terms of security as well.