# service slapd start failed the first time with the error:

grep: /usr/local/etc/openldap/slapd.d/cn=config/olcDatabase=*: No such file or directory

Another stupid compound typo… when I created the folder /usr/local/etc/openldap/slapd.d/, I mis-named it slap.d/. Not only that, but my slapadd command to parse the ldif file into the database folder had the same typo, so it worked (the folder existed). Of course, rc went looking for the correct folder name and couldn't find it…

c_rehash . has been deprecated and removed from the openssl package. The correct command to use is openssl rehash .

The olcTLSCACertificateFile: attribute in slapd.ldif seems to work fine… both CAs have their certs in /usr/share/certs/trusted/, and I made sure those were recognized by the system using certctl rehash.

slapd_flags also needs an entry for ldaps://0.0.0.0/. If that's not there, rc won't launch the service with TLS enabled and there's no possibility of getting it to work. This was the last piece I needed to get secure TLS communication working.

# ldapadd -Y EXTERNAL -H ldapi:/// -f 01.config.InstallMemberOf.ldif failed with an insufficient access error. This one took me a bit to figure out, and I learned something along the way (bonus!). My first slapd.ldif was based on the example provided with the FreeBSD openldap26-server pkg install. That example does not allow this command to run out of the box. The fix:

olcAccess: to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break

Applying this to both the frontend and config database sections solved the issue. Given that the Ubuntu template seems a little more modern and full-featured than the one that FreeBSD ships, I switched my entire template over to the Ubuntu version and customized from there.
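As an aside to the c_rehash note above, here's a self-contained way to see what openssl rehash actually does. The temp directory and throwaway cert are purely illustrative, not part of my setup:

```shell
# Create a scratch directory with one throwaway self-signed CA cert,
# then let `openssl rehash` build the <subject-hash>.0 symlinks that
# OpenSSL-linked software uses to look up trusted CAs by subject.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" \
    -keyout "$dir/demo.key" -out "$dir/demo.crt" 2>/dev/null
openssl rehash "$dir"
ls "$dir"    # demo.crt, demo.key, plus a hash symlink such as 4f06f81d.0
rm -rf "$dir"
```

certctl rehash does the equivalent job for the system-wide trust store on FreeBSD.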
kadmind: LDAP bind dn value missing while initializing, aborting.

# pkg install openldap26-client cyrus-sasl-gssapi krb5 pam_ldap sssd2 sudo-sssd
# certctl rehash

Add sssd_enable="YES" to /etc/rc.conf.

I set create_homedir = False in /usr/local/etc/sssd/sssd.conf. I did this because I didn't have time to play around with the pam_zfs_key.so module that supports creation of encrypted /home/$user datasets on first login. Given the extremely low number of people using my systems (4 total, and 2 of those almost never), I'm fine with manually setting up home datasets for family before they can log on to a host. There really won't be that many hosts that allow local logon... if the situation ever changes I'll dig in and look at enabling automatic home folder / dataset creation, but for now I'll skip that potential headache. Set create_homedir = True in sssd.conf for this client.

sudoers: files sss

Running # ls with flags that show ownership, I never looked hard enough to notice that the results were returning without any GID-to-name translation, e.g. -rw-r--r-- 1 firstuser 10002 208 Mar 16 20:10 somefile instead of -rw-r--r-- 1 firstuser firstuser 208 Mar 16 20:10 somefile.

mech_list: GSSAPI PLAIN as the first line of /usr/local/lib/sasl2/slapd.conf. This was too restrictive, in that it disallows the system-root-to-LDAP-admin translation we took advantage of during setup. The fix was mech_list: GSSAPI PLAIN EXTERNAL.

# ldapadd -Y EXTERNAL -H ldapi:/// -f domainHost.ldif

# kadmin.local
> addprinc -x dn=cn=laptop1,ou=hosts,dc=mydomain,dc=com -randkey host/laptop1.mydomain.com
(the -x flag tells kadmin to attach the new principal to the specified LDAP entry, instead of just placing it in the default kerberos subtree of dc=mydomain,dc=com)
> ktadd -k /root/laptop1.keytab host/laptop1.mydomain.com
> exit
# kinit -k -t /etc/krb5.keytab host/laptop1.mydomain.com
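For reference, the nsswitch changes above end up looking something like the fragment below. The sudoers line is the one called out in my notes; the passwd and group lines are my assumption, inferred from the GID-translation symptom described above:

```
# /etc/nsswitch.conf (fragment) - fall through from local files to sssd
passwd: files sss
group: files sss      # assumed: without sss here, `ls -l` shows raw GIDs, not names
sudoers: files sss    # lets sudo-sssd pull sudo rules via sssd
```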
# slappasswd -o module-load=pw-sha2 -h '{SSHA512}' -s 'your-password-here'
# ldapadd -x -D 'cn=admin,dc=mydomain,dc=com' -W -H ldapi:/// -f replicator.ldif

dn: uid=replicator,ou=accounts,dc=mydomain,dc=com
objectClass: domainAccount
objectClass: simpleSecurityObject
cn: replicator
name: replicator
sn: replicator
uid: replicator
userPassword: {SSHA512}++Redacted==
description: Replication user
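After loading replicator.ldif, a quick sanity check is to bind as the new account. This is a sketch (it needs the live slapd from the steps above, and prompts for the password you fed slappasswd):

```
# Verify the replicator account can authenticate with its new password.
# On success, ldapwhoami echoes back the bound DN.
ldapwhoami -x -W -H ldapi:/// \
    -D "uid=replicator,ou=accounts,dc=mydomain,dc=com"
```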
# ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f acl_update.ldif

dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.exact="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by dn="cn=admin,dc=mydomain,dc=com" manage by dn="uid=firstuser,ou=accounts,dc=mydomain,dc=com" manage by dn="uid=replicator,ou=accounts,dc=mydomain,dc=com" read by dn="uid=ldapbinduser,ou=accounts,dc=mydomain,dc=com" read by dn="uid=kdc-service,ou=accounts,dc=mydomain,dc=com" read by dn="uid=kadmin-service,ou=accounts,dc=mydomain,dc=com" write by * break
olcAccess: {1}to dn.children="ou=accounts,dc=mydomain,dc=com" attrs=userPassword,shadowExpire,shadowInactive,shadowLastChange,shadowMax,shadowMin,shadowWarning by self write by anonymous auth
olcAccess: {2}to dn.subtree="dc=mydomain,dc=com" by self read
olcAccess: {3}to attrs=krbPrincipalKey by anonymous auth by dn.exact="cn=kdc-service,ou=accounts,dc=mydomain,dc=com" read by dn.exact="cn=kadmin-service,ou=accounts,dc=mydomain,dc=com" write by self write by * none
olcAccess: {4}to dn.subtree="cn=krbContainer,dc=mydomain,dc=com" by dn.exact="cn=kdc-service,ou=accounts,dc=mydomain,dc=com" read by dn.exact="cn=kadmin-service,ou=accounts,dc=mydomain,dc=com" write by * none
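To check the new ACLs without round-tripping through the server, slapacl can evaluate access offline against the config directory. A sketch, assuming the FreeBSD config path and a target entry from this tree:

```
# Ask slapd's ACL engine whether the replicator may read userPassword
# on an account entry (it matches the blanket read grant in rule {0}).
slapacl -F /usr/local/etc/openldap/slapd.d \
    -D "uid=replicator,ou=accounts,dc=mydomain,dc=com" \
    -b "uid=firstuser,ou=accounts,dc=mydomain,dc=com" \
    "userPassword/read"
```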
# ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f provider_simple_sync.ldif

# Add indexes to the frontend db
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryCSN eq
-
add: olcDbIndex
olcDbIndex: entryUUID eq

# Load the syncprov module
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov

# syncrepl Provider for primary db
dn: olcOverlay=syncprov,olcDatabase={1}mdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpCheckpoint: 100 10
olcSpSessionLog: 100
Set olcMemberOfDangling: ignore on the master LDAP instance. Create fix_memberOf.ldif and apply it with:

# ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f fix_memberOf.ldif

dn: olcOverlay={0}memberof,olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcMemberOfDangling
olcMemberOfDangling: ignore
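You can confirm the overlay picked up the change by reading it back out of cn=config; a sketch, trimmed to the one attribute we care about:

```
# Read back the memberof overlay's dangling-reference setting
ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b cn=config \
    "(olcOverlay=memberof)" olcMemberOfDangling
```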
Update olcSaslHost in step 17 to use the FQDN of the backup LDAP jail.

# ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f consumer_simple_repl.ldif

dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryUUID eq
-
add: olcSyncrepl
olcSyncrepl: rid=0
provider=ldap://ldapjail.mydomain.com
bindmethod=simple
binddn=uid=replicator,ou=accounts,dc=mydomain,dc=com credentials=<actual password here (not the hash)>
searchbase=dc=mydomain,dc=com
schemachecking=on
type=refreshAndPersist retry="60 10 300 +"
starttls=critical tls_reqcert=demand
-
add: olcUpdateRef
olcUpdateRef: ldap://ldapjail.mydomain.com
Check contextCSN on both the primary and secondary LDAP jails:

# ldapsearch -Y EXTERNAL -H ldapi:/// -b "dc=mydomain,dc=com" -s base contextCSN

Once both numbers are the same, you know that replication has completed and you have a full copy on your secondary LDAP jail. Try querying the secondary from a client machine - it should give you good results now.

kdc = ldapjail.mydomain.com and kdc = ldapjail2.mydomain.com in the MYDOMAIN.COM section of the [realms] heading. ldapjail2 should be first so that the local Kerberos queries the local LDAP.

Add mech_list: GSSAPI PLAIN EXTERNAL as the first line of /usr/local/lib/sasl2/slapd.conf

kadmind_enable="NO"

# kadmin.local
> addprinc -randkey host/ldapjail2.mydomain.com
> addprinc -randkey ldap/ldapjail2.mydomain.com
> ktadd -k /root/ldapjail2.keytab host/ldapjail2.mydomain.com ldap/ldapjail2.mydomain.com
> exit

root:ldap.

# service slapd restart
# service kdc start
# service saslauthd start
# kinit -k -t /etc/krb5.keytab host/ldapjail2.mydomain.com

Two kdc = entries in the [realms] MYDOMAIN = { section... one for ldapjail.mydomain.com and one for ldapjail2.mydomain.com.

Update the ldap_uri and krb5_server lines to include the secondary. These are comma-separated entries, so just append ,ldapjail2.mydomain.com.

…the -o special_small_blocks=XXXX flag. The docs say this should set special_small_blocks to 0, which means these datasets will only store metadata on the special vdev. Perfect. When I copied over the files from my backup media to these new datasets, I monitored the progress using # zpool iostat -v tank 5, which let me watch as the data was written to the raidz3 at approximately the read speed I'd expect from the external USB HDD that was temporarily holding it. 250MB/s, or thereabouts.

I used -o special_small_blocks=131072 when setting up these datasets. 128kB seemed like a reasonable break point, given the size distribution of my files. When I copied the files over from deep storage to the new datasets, however, I didn't see what I expected.
# zpool iostat -v tank 5 showed that nearly all of my files were going to the special vdev. Once or twice I'd see a single write IOP on the raidz3 vdev, but not nearly the number I'd expect. I also expected to see the counter showing allocated storage on both vdevs tick up, but it only ticked up on the special vdev.
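That behavior is consistent with how special_small_blocks interacts with recordsize: blocks up to and including the threshold are sent to the special vdev, and with the default recordsize of 128k, every full record of every file qualifies. A hedged sketch of the distinction (dataset names are hypothetical):

```
# With recordsize=128k (the default) and special_small_blocks=131072,
# every data block is <= the threshold, so ALL file data lands on the
# special vdev -- exactly the iostat behavior described above.
zfs create -o special_small_blocks=131072 tank/all-on-special

# Raising recordsize above the threshold restores the intended split:
# big files write 1M records to the raidz3, while blocks <= 128k
# (small files, tails) still go to the special vdev.
zfs create -o recordsize=1M -o special_small_blocks=131072 tank/split
```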