Adding Encrypted Disks to FreeBSD

Since FreeBSD 10.0 the system installer has included an option to encrypt the disks automatically during installation, which is great; before that it was quite a pain. This post focuses on adding encrypted disks after installation. In this example I will be adding a pair of drives in a ZFS mirror to my FreeBSD backup server. We are starting with empty disks. It is important to know that any data on these drives will be destroyed in the process, so be sure to back up anything you want to copy back to them.

We'll start by identifying the disks we are going to encrypt and then mirror. They are a pair of Toshiba drives, which we can locate easily with camcontrol:

root@archive:~ # camcontrol devlist  
<TOSHIBA MD04ACA400 FP2A>          at scbus0 target 0 lun 0 (pass0,ada0)  
<TSSTcorp DVD+-RW TS-L633C DW40>   at scbus1 target 0 lun 0 (cd0,pass1)  
<M4-CT064M4SSD2 0309>              at scbus2 target 0 lun 0 (pass2,ada1)  
<TOSHIBA MD04ACA400 FP2A>          at scbus3 target 0 lun 0 (pass3,ada2)  
<AHCI SGPIO Enclosure 1.00 0001>   at scbus4 target 0 lun 0 (ses0,pass4)  

So we're dealing with /dev/ada0 and /dev/ada2. These device names can change if the drives are ever moved around, so it's a good idea to label them first and reference the label rather than the device name. Choose a descriptive label for each disk: a serial number, a slot number, whatever helps you identify the disk when it eventually needs to be replaced.

Some prefer to make a partition 100 MB to a few GB smaller than the actual disk, so that when it comes time to replace a failed drive you don't have to find one with the identical amount of space. I have spare drives of this model on hand, and I will be building a new backup server in the near future anyway, so I'm not going to worry about it. In large arrays, though, consider the cost of that buffer per drive across your entire pool; in zpools with several dozen or hundred drives it can add up to a significant amount of wasted space. If you do want to leave buffer space, create a partition using gpart and use glabel to label the partition instead of the entire drive; a tutorial on that can be found here.
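If you do go the partition route, the sequence looks roughly like this. This is a sketch only: the 3900G size and the label name are illustrative values you would adjust for your own disks.

```shell
# Carve out a GPT partition a bit smaller than the 4 TB disk,
# then label the partition rather than the whole drive.
gpart create -s gpt ada0
gpart add -t freebsd-zfs -s 3900G ada0   # leaves a buffer at the end of the disk
glabel label int_4tb /dev/ada0p1         # label the partition, not ada0 itself
```

From there the rest of the procedure is the same; you simply point geli at the labeled partition.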

Here I am going to be dealing with an internal and an external drive so I will label them as such with glabel like this:

root@archive:~ # glabel label int_4tb /dev/ada0  
root@archive:~ # glabel label ext_4tb /dev/ada2  

Labels can be found and verified with glabel status. Now let's create an encryption key for each drive; that way the drives will be mirrored, but without the keys and passphrase they appear to contain completely different data. In root's home directory we'll create a folder called geli and generate a 64-byte key of random data for each disk:

root@archive:~ # mkdir /root/geli  
root@archive:~ # dd if=/dev/random of=/root/geli/int_4tb.key bs=64 count=1  
1+0 records in  
1+0 records out  
64 bytes transferred in 0.000040 secs (1587026 bytes/sec)  
root@archive:~ # dd if=/dev/random of=/root/geli/ext_4tb.key bs=64 count=1  
1+0 records in  
1+0 records out  
64 bytes transferred in 0.000040 secs (1592476 bytes/sec) 
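As a quick sanity check, each key file should be exactly 64 bytes. This generic snippet runs the same dd invocation against a throwaway temporary file, so it can be tried safely without touching the real keys:

```shell
# Generate a throwaway 64-byte key the same way and confirm its size.
key=$(mktemp)
dd if=/dev/random of="$key" bs=64 count=1 2>/dev/null
wc -c < "$key"    # should print 64
rm -f "$key"
```

It is also worth tightening permissions on the real key directory (chmod 700 /root/geli) so only root can read the key material.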

The next step is to configure the drives with geli to encrypt them. I will be using the entire drive rather than creating a partition first. Let's use geli to initialize those drives with their keys AND a passphrase:

root@archive:~ # geli init -s 4096 -K /root/geli/int_4tb.key /dev/label/int_4tb  
Enter new passphrase:  
Reenter new passphrase: 

Metadata backup can be found in /var/backups/label_int_4tb.eli and  
can be restored with the following command:

        # geli restore /var/backups/label_int_4tb.eli /dev/label/int_4tb

root@archive:~ # geli init -s 4096 -K /root/geli/ext_4tb.key /dev/label/ext_4tb  
Enter new passphrase:  
Reenter new passphrase: 

Metadata backup can be found in /var/backups/label_ext_4tb.eli and  
can be restored with the following command:

        # geli restore /var/backups/label_ext_4tb.eli /dev/label/ext_4tb

Now, the keys in the /root/geli directory are important; you will want to keep a copy of them somewhere off this system. My main drive in this system is an SSD, and when it fails I will not be able to access my backups without those keys. I recommend keeping a copy on a thumb drive or CD-R stored in a safe location. I also keep a copy of the GELI metadata backups in case I ever need to restore those as well. This can be done like this:

root@archive:~ # tar -cvzf ~/archive-geli-backup-`date '+%F'`.tar.gz ~/geli/* \
/var/backups/label_int_4tb.eli \
/var/backups/label_ext_4tb.eli
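With the archive created, move a copy off the machine. Both lines below are sketches: the remote host name and the thumb-drive device are placeholders you will need to adjust for your setup.

```shell
# Copy the key/metadata archive to another machine ("backuphost" is a placeholder):
scp ~/archive-geli-backup-*.tar.gz you@backuphost:/some/safe/place/
# Or copy it to a FAT-formatted thumb drive (the device name will vary):
mount -t msdosfs /dev/da0s1 /mnt
cp ~/archive-geli-backup-*.tar.gz /mnt/
umount /mnt
```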

Next we're going to attach the drives so we can work with the encrypted devices. To attach the drives we run:

root@archive:~ # geli attach -k /root/geli/int_4tb.key /dev/label/int_4tb  
Enter passphrase:  
root@archive:~ # geli attach -k /root/geli/ext_4tb.key /dev/label/ext_4tb  
Enter passphrase:

We now have our encrypted drives available to the system. With GELI you now work with the decrypted devices /dev/label/int_4tb.eli and /dev/label/ext_4tb.eli; if you write directly to the underlying devices (/dev/ada0 or /dev/label/int_4tb) at this point you will destroy your data! The .eli devices exist only while the drives are attached with geli using the proper passphrase and key. Only write to the .eli devices.
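A quick way to confirm which devices are safe to write to: the .eli nodes appear under /dev/label/ only while the drives are attached.

```shell
# int_4tb.eli and ext_4tb.eli should be listed after a successful
# geli attach, and disappear again after geli detach.
ls /dev/label/
```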

The next step is to create the mirrored zpool with our two decrypted devices:

root@archive:~ # zpool create backup01 mirror /dev/label/int_4tb.eli /dev/label/ext_4tb.eli  
root@archive:~ # zpool status  
  pool: backup01
 state: ONLINE
  scan: none requested

    NAME                   STATE     READ WRITE CKSUM
    backup01               ONLINE       0     0     0
      mirror-0             ONLINE       0     0     0
        label/int_4tb.eli  ONLINE       0     0     0
        label/ext_4tb.eli  ONLINE       0     0     0

errors: No known data errors  

root@archive:~ # zfs list  
backup01             264K  3.51T    96K  /backup01  
---- snip ----

After a reboot the drives need to be reattached manually; this is intentional. If the machine itself is physically stolen, I don't want the thief to have access to my backups. To reattach the encrypted disks after a reboot or power cycle, run the same attach commands: geli attach -k /root/geli/int_4tb.key /dev/label/int_4tb and geli attach -k /root/geli/ext_4tb.key /dev/label/ext_4tb. If you are using ZFS, as in this example, first export the zpool with zpool export backup01, then attach your encrypted disks, then import the zpool again, like this:

root@archive:~ # zpool export backup01  
root@archive:~ # geli attach -k /root/geli/int_4tb.key /dev/label/int_4tb  
root@archive:~ # geli attach -k /root/geli/ext_4tb.key /dev/label/ext_4tb  
root@archive:~ # zpool import backup01

Here is a script I keep in /root/bin that does this for me, prompting for the passphrases as it attaches the drives and imports the zpool:

#!/bin/sh
/sbin/zpool export backup01
/sbin/geli attach -k /root/geli/int_4tb.key /dev/label/int_4tb
/sbin/geli attach -k /root/geli/ext_4tb.key /dev/label/ext_4tb
/sbin/zpool import backup01
# Print out the status
/sbin/zpool status backup01 | /usr/bin/grep "^[[:space:]]backup01" | /usr/bin/awk '{ print $1 " " $2 }'
# Restart services that depend on this zpool
/usr/sbin/service nfsd restart && /usr/sbin/service mountd restart
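Assuming the script is saved as, say, /root/bin/attach-backup01 (the name is arbitrary), make it executable and readable by root only, then run it after each reboot:

```shell
# Hypothetical path; use whatever name you gave the script.
chmod 700 /root/bin/attach-backup01
/root/bin/attach-backup01
```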