A while back I wrote an introduction post about vSphere Integrated Containers. Since that post, the product has been updated several times, bringing new features and fixes. In this post I want to look at using an NFS share as a volume store, which can then serve as persistent shared storage between containers.
I tried to get this working with version 1.1 back then, but without success. I discussed the issue with the VIC team at the time, and we came to the conclusion that something weird was going on in how the Virtual Container Host was handling the NFS mounts. Now, a few versions later, those issues appear to be fixed, which is why I thought I’d give it another shot.
Background
Containers are stateless by default, so when containers live and die the data inside them is not retained. Technically the data lives on the container’s ‘tmpfs mount’ and will survive a stop and start, but not removal. With microservices, containers are spun up when needed and removed again afterwards. vSphere Integrated Containers provides two options for persistent storage: a volume backed by a vSphere VMDK, and a volume backed by an NFS share. If you want to share data between two or more containers, a VMDK-based volume store will not work because it can only be mapped to one container at a time. That is fine when containers are spun up and down one after another against the same volume (and the data on it), but if you need to share data between multiple running containers, NFS is the way to go with vSphere Integrated Containers.
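As a quick illustration of that stateless behaviour on a regular Docker host (container and file names here are just examples), you can write a file into a container, remove the container, and watch a fresh one start without it:
docker run -t -d --name scratch busybox
docker exec scratch touch /tmp/data.txt
docker rm -f scratch
docker run -t -d --name scratch busybox
docker exec scratch ls /tmp/
The second ls comes back empty because the new container starts from a clean filesystem; only a volume would have preserved the file.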
The setup
In my lab I am running vSphere 6.7 and vSphere Integrated Containers 1.4. The NFS share I am going to use is running on my (very old) QNAP ts-212 NAS. The first thing I did was set up an NFS share with the following characteristics.
"/share/MD0_DATA/Containers" *(rw,async,no_subtree_check,insecure,no_root_squash)
Note that I am using ‘no_root_squash’, which is an important choice because it influences how the Virtual Container Host and the actual containers interact with the NFS server, and how ownership and rights need to be set on the NFS server. Since this is a test environment, I can get away with ‘no_root_squash’ and have the root user own the shared directory. If you need more security because you are setting this up in a production environment, you would set ‘root_squash’ and have the anonymous user own the shared directory. Also, don’t forget to make root a member of the anonymous group.
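For comparison, a more locked-down export could look something like the line below. This is just a sketch; the anonuid/anongid values depend on which anonymous user your NFS server uses:
"/share/MD0_DATA/Containers" *(rw,async,no_subtree_check,insecure,root_squash,anonuid=65534,anongid=65534)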
Virtual Container Host
The next thing we need is a Virtual Container Host. After downloading the vic engine binaries I ran
vic-machine-darwin create with the following settings to set up my Virtual Container Host.
./vic-machine-darwin create \
  --target vcenter.home.local \
  --user administrator@vsphere.local \
  --password '<password>' \
  --name vch-01 \
  --compute-resource Prod-01 \
  --image-store FlashDatastore02 \
  --volume-store FlashDatastore02:default \
  --volume-store 'nfs://192.168.1.10/Containers?uid=0&gid=0:nfs' \
  --bridge-network dv-vic-network \
  --public-network dv-vm-network \
  --public-network-ip 192.168.1.81/24 \
  --public-network-gateway 192.168.1.1 \
  --dns-server 192.168.1.109 \
  --container-network dv-container-network:vic-containers \
  --cng dv-container-network:192.168.1.1/24 \
  --cnd dv-container-network:192.168.1.109 \
  --cnr dv-container-network:192.168.1.85-192.168.1.90 \
  --cnf dv-container-network:open \
  --no-tlsverify \
  --tls-cname vch-01.home.local \
  --tls-cert-path /Users/wvanede/Downloads/vic/certs \
  --organization 'homelab' \
  --certificate-key-size 3072 \
  --thumbprint <vCenter server thumbprint>
You will notice that I used a lot more options than this post strictly needs, so I will not go over every one of them. The option we are most interested in for now is the
--volume-store 'nfs://192.168.1.10/Containers?uid=0&gid=0:nfs'
This tells the Virtual Container Host to use this NFS share whenever a Docker volume is created with the label ‘nfs’. The uid=0 and gid=0 parameters tell the Virtual Container Host to connect to the NFS share as the root user, which ties back to the squash options described earlier.
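For reference, the option follows the form nfs://<host>/<share path>?uid=<uid>&gid=<gid>:<label>. If your export squashes root as described earlier, you would point uid and gid at the anonymous user instead; something like the line below should do it (the 65534 values are just an example and depend on your NFS server):
--volume-store 'nfs://192.168.1.10/Containers?uid=65534&gid=65534:nfs'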
When I log in to my QNAP NAS with SSH I can see that the volume directories are already created with the root account (in QNAP’s case the admin account but the GID and UID are 0).
[/share/MD0_DATA/Containers] # ls -la
drwxrwxrwx  5 admin administ 4096 Jul 10 15:16 ./
drwxrwxrwx 28 admin administ 4096 Jul 10 15:00 ../
drwxrwxrwx  2 admin administ 4096 Jul 10 15:00 @Recycle/
drwxrwxrwx  3 admin administ 4096 Jul 11 12:28 volumes/
drwxrwxrwx  3 admin administ 4096 Jul 11 12:28 volumes_metadata/
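To double-check that the Virtual Container Host itself is up and to retrieve its Docker endpoint, vic-machine also ships an inspect command; something along these lines should work, with the flags mirroring the create command above:
./vic-machine-darwin inspect \
  --target vcenter.home.local \
  --user administrator@vsphere.local \
  --name vch-01 \
  --thumbprint <vCenter server thumbprint>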
Docker volume
With the Virtual Container Host set up, we can now create our volumes with the docker volume create command. We need to pass the ‘nfs’ label so the volume is created on the NFS share in the background.
docker --tls volume create --name nfs-share --opt VolumeStore=nfs
I am using two parameters with this command. The first one, --name, obviously names the volume; this is the name we pass on when creating a container. The --opt VolumeStore=nfs option tells the Virtual Container Host to create the volume on the NFS share.
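You can also verify this from the Docker side; the standard volume commands work against the Virtual Container Host, and docker volume inspect should, if I remember correctly, show which volume store the volume landed on:
docker --tls volume ls
docker --tls volume inspect nfs-share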
When I now log in to the QNAP with SSH I see that the directory for this particular volume has been created.
[/share/MD0_DATA/Containers/volumes] # ls -la
drwxrwxrwx 3 admin administ 4096 Jul 11 12:28 ./
drwxrwxrwx 5 admin administ 4096 Jul 10 15:16 ../
drwxr-xr-x 2 admin administ 4096 Jul 11 12:29 nfs-share/
Starting the container
To test whether the volume is working, I will spin up a busybox container and connect it to the volume ‘nfs-share’.
docker --tls run -t -d --name nfs-test -v nfs-share:/mnt/nfs busybox
docker --tls exec nfs-test touch /mnt/nfs/test
docker --tls exec nfs-test ls /mnt/nfs/
test
The commands above create and start the container, add a new file to the location where the NFS share is mounted, and then list that file. If you now spin up another container and point it to the same volume, both containers will be able to use the NFS share in the backend, as shown below.
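A quick sketch of what that looks like (the second container’s name is just an example); the file created by the first container should show up straight away:
docker --tls run -t -d --name nfs-test2 -v nfs-share:/mnt/nfs busybox
docker --tls exec nfs-test2 ls /mnt/nfs/
test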
From the QNAP it looks like this.
[/share/MD0_DATA/Containers/volumes/nfs-share] # ls -la
drwxr-xr-x 2 admin administ 4096 Jul 11 14:02 ./
drwxrwxrwx 3 admin administ 4096 Jul 11 12:28 ../
-rw-rw-rw- 1 admin administ    0 Jul 11 14:02 test
Conclusion
There you have it: NFS with vSphere Integrated Containers. Using NFS as a backend for your volumes is very useful for getting data out of containers or sharing it between them. It is currently also the only way to do so, unless you have some sort of replication mechanism built into your containers. Of course, it depends on the type of data you are getting into the container in the first place; there could well be a database backing your container, but if you are working with large files, for example, NFS-backed volumes could be the best option.