Introduction
Kubernetes, for all its greatness, still needs a place to reliably store data and keep that data across pod restarts and node failures. Enter NFS, the Network File System. NFS has been around for many years; it was created by Sun Microsystems (now owned by Oracle) back in 1984. Now at version 4, it remains a viable solution with clients for most operating systems, and it is commonly used in data centers by VMware ESXi and other virtualization platforms to move storage across server fleets. Kubernetes provides native NFS support; there is no need to install custom controllers.
NFSv3 or NFSv4
One crucial difference from v3: v4 is stateful. Opens, writes, reads, and locks have state that is maintained by the server. NFSv3, on the other hand, is stateless, and the client performs operations such as locking on its own; an application can lose its lock on a network interruption. Locking in NFSv4 is lease-based, which requires the client to stay in contact with the server to preserve the lock.
Install NFS support in each Kubernetes node
Kubernetes uses the native Linux NFS client to mount volumes. On Ubuntu, install it with:
$ sudo apt update && sudo apt upgrade -y
$ sudo apt-get install nfs-common -y
Note: nfs-common provides the client; nfs-kernel-server is only needed if the node itself will export shares.
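A quick sanity check that the client tooling is actually in place (package name as used above; the exact nfs-utils version will vary by release):

```
$ mount.nfs -V          # prints the nfs-utils version
$ dpkg -s nfs-common    # confirms the package is installed
```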
Verify connectivity to the NFS server.
I learned this command long ago, in 1998, while trying to “inspect” a Sun E10000 I was not supposed to be meddling with. It is still useful for inspecting NFSv3 shares.
Note: casa is the name of my NAS file server.
$ showmount -e casa
Exports list on casa:
/mnt/volume01/home 192.168.1.0 192.168.10.0
It is always good to verify NFS volume access and the protocol version. First, try mounting an NFSv4 volume.
Create a test mount point
$ sudo mkdir /mnt/test-nfs
Attempt to mount a volume using NFSv4
$ sudo mount -t nfs4 -o vers=4.1 casa:/test /mnt/test-nfs
If that does not work, try NFSv3. For NFSv3 my NAS requires that I specify the full server path (i.e., /mnt/volume01/k3s-storage/test):
$ sudo mount -v -t nfs -o vers=3 casa:/mnt/volume01/k3s-storage/test /mnt/test-nfs
Write test data to the NFS volume
$ dd if=/dev/random of=/mnt/test-nfs/random-data.dat bs=64k count=1000
Test NFS type
Run the nfsstat command. It outputs NFS statistics for both NFSv3 and NFSv4; notice which protocol has read and write activity. In the example shown, the mounted volume is NFSv4.
$ nfsstat
Client rpc stats:
calls retrans authrefrsh
195029 0 195030
Client nfs v3:
null getattr setattr lookup access
0 0% 0 0% 0 0% 0 0% 0 0%
readlink read write create mkdir
0 0% 0 0% 0 0% 0 0% 0 0%
symlink mknod remove rmdir rename
0 0% 0 0% 0 0% 0 0% 0 0%
link readdir readdirplus fsstat fsinfo
0 0% 0 0% 0 0% 0 0% 0 0%
pathconf commit
0 0% 0 0%
Client nfs v4:
null read write commit open
34 0% 0 0% 95606 99% 61 0% 61 0%
open_conf open_noat open_dgrd close setattr
0 0% 0 0% 0 0% 60 0% 189 0%
fsinfo renew setclntid confirm lock
22 0% 0 0% 24 0% 4 0% 0 0%
lockt locku access getattr lookup
0 0% 0 0% 18 0% 24 0% 138 0%
...
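A quicker check than reading the full counters: nfsstat -m prints the options of each active NFS mount, including the negotiated version (output abridged here; the path and server reflect the earlier example):

```
$ nfsstat -m
/mnt/test-nfs from casa:/test
 Flags: rw,relatime,vers=4.1,...
```

When you are done testing, unmount the volume with sudo umount /mnt/test-nfs.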
Create an NFS PersistentVolume
- Create a namespace
$ kubectl create namespace playground
namespace/playground created
- Define a volume group
We will be writing all volumes under the base path /k3s-storage. A PersistentVolume can only be mounted with one access mode at a time.
The following access modes are available for NFS:
– ReadWriteOnce: the volume can be mounted as read-write by a single node.
For databases and data stores where data consistency is paramount. Classic examples:
Oracle, MySQL, PostgreSQL.
– ReadOnlyMany: the volume can be mounted read-only by many nodes.
An example is one database master with read-only replicas.
– ReadWriteMany: the volume can be mounted as read-write by many nodes.
Used for object stores, data warehouse “cold storage”, web assets, and ETL.
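As a sketch, a ReadWriteMany NFS PV differs from the ReadWriteOnce example in this article only in its accessModes and storageClassName (the names nfs-pv-shared and nfs-rwmany below are hypothetical):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-shared
spec:
  storageClassName: nfs-rwmany
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany       # many nodes may mount read-write
  nfs:
    server: casa
    path: "/nfs-pv-shared"
```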
Persistent Volumes (PV) and Persistent Volume Claims (PVC)
Persistent Volumes are pieces of storage allocated by the cluster administrator or dynamically provisioned. PVs are cluster-wide, not namespaced.
A Persistent Volume Claim is a request made by the user for previously allocated storage. PVCs are namespaced.
- Create a Persistent Volume File: kube-nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv-0001
labels:
type: networked
spec:
storageClassName: nfs-rwonce
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: casa
path: "/nfs-pv-0001"
readOnly: false
Apply it. (PersistentVolumes are cluster-scoped, so the -n flag has no effect on the PV itself; the storageClassName here simply acts as a label that pairs this PV with a matching PVC, so no StorageClass object needs to be defined for static provisioning.)
$ kubectl -n playground apply -f kube-nfs-pv.yaml
persistentvolume/nfs-pv-0001 created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-0001 10Gi RWO Retain Available nfs-rwonce 30s
Tip: To mount the volume using NFSv3, replace the mountOptions and nfs sections with:
mountOptions:
- hard
- nfsvers=3
nfs:
server: casa
path: "/mnt/volume01/k3s-storage/nfs-pv-0001"
readOnly: false
- Create a VolumeClaim (PVC)
File: kube-nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Mi
storageClassName: nfs-rwonce
Apply it
$ kubectl -n playground apply -f kube-nfs-pvc.yaml
persistentvolumeclaim/test-claim created
$ kubectl -n playground get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Pending
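The claim can sit in Pending for a moment while the control plane matches it against available PVs. You can watch it bind, or inspect the events if it stays Pending (standard kubectl commands, no extra tooling assumed):

```
$ kubectl -n playground get pvc -w
$ kubectl -n playground describe pvc test-claim
```

Once bound, STATUS changes to Bound and VOLUME shows nfs-pv-0001.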
- Run pod to use your new nfs volume claim
File: test-pod-nfs-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
name: test-nfs-pvc
namespace: playground
spec:
containers:
- name: busybox
image: busybox:latest
command: ["sh"]
args: ["-c", "while true; do echo 'Hello World' > /data/nfs-data.dat; ls -la /data; cat /data/nfs-data.dat; sleep 10; done"]
resources:
limits:
cpu: 100m
memory: 50Mi
volumeMounts:
- name: test-store
mountPath: /data
volumes:
- name: test-store
persistentVolumeClaim:
claimName: test-claim
Apply it
$ kubectl -n playground apply -f test-pod-nfs-pvc.yaml
pod/test-nfs-pvc created
$ kubectl -n playground get pods
NAME READY STATUS RESTARTS AGE
test-nfs-pvc 1/1 Running 0 17s
$ kubectl -n playground logs -f pod/test-nfs-pvc
total 14
drwxr-xr-x 2 nobody 42949672 3 Jul 9 00:22 .
drwxr-xr-x 1 root root 4096 Jul 9 00:30 ..
-rw-r--r-- 1 nobody 42949672 12 Jul 9 00:30 nfs-data.dat
Hello World
^C
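To tear the experiment down, delete the objects in reverse order. Because the PV's reclaim policy is Retain (the default for manually created PVs), the data on the NFS export itself is not deleted:

```
$ kubectl -n playground delete pod test-nfs-pvc
$ kubectl -n playground delete pvc test-claim
$ kubectl delete pv nfs-pv-0001
$ sudo umount /mnt/test-nfs
```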
Final thoughts
Kubernetes and NFS work very well together and provide reliable networked storage for most persistent workloads; in the cloud, you can use managed implementations such as AWS EFS or Google Cloud Filestore.