Modern Kubernetes clusters running on Amazon EKS often require shared, ReadWriteMany (RWX)-capable storage for applications such as logging, CI/CD systems, caching layers, or shared configuration. While AWS offers multiple storage options, pointing the cluster at an existing on-prem or cloud-hosted NFS server remains a simple, powerful way to get RWX support with minimal operational overhead.
This guide walks you through the entire journey: preparing an NFS server from a raw disk, exporting it securely to the cluster, installing the NFS Subdir External Provisioner with Helm, and validating dynamic RWX provisioning with a test workload.
Prerequisites
Before deploying the NFS Subdir External Provisioner on EKS, we first need a fully functional NFS server. This server will act as the backend storage for all dynamically created PVC directories.
The following prerequisites ensure that both the NFS server and the EKS cluster can communicate successfully end-to-end:

- A Linux host (an EC2 instance or on-prem VM) with a spare disk, which will act as the NFS server
- Network connectivity between the EKS worker nodes and that host, with TCP port 2049 allowed in the relevant security groups or firewalls
- kubectl access to the EKS cluster and Helm installed on your workstation
This section walks through preparing a functional NFS server from a raw disk to exporting directories for Kubernetes dynamic provisioning.
The nfs-utils package provides everything needed to run an NFS server, including the server daemons (rpc.nfsd, rpc.mountd), the exportfs utility for managing exports, and diagnostic tools such as showmount and nfsstat. Install it first:
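```bash
sudo dnf install -y nfs-utils   # use yum instead of dnf on Amazon Linux 2 / older RHEL releases
```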
Note: These commands assume a RHEL-based distribution (Amazon Linux, Rocky Linux, CentOS, Red Hat).
For Ubuntu/Debian systems, replace this with:
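```bash
sudo apt-get update
sudo apt-get install -y nfs-kernel-server
```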
NFS requires a directory to export. Most deployments use a dedicated secondary disk to avoid filling the system root volume.
List available disks
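```bash
lsblk
```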
You should see something like:
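```
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
xvda    202:0    0   30G  0 disk
└─xvda1 202:1    0   30G  0 part /
xvdb    202:16   0  100G  0 disk
```

(Device names and sizes will vary with your instance type and attached volumes.)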
For this example, we assume the target disk is: /dev/xvdb.
Kubernetes workloads require stable, persistent storage. A dedicated partition ensures isolation, easier resizing, and predictable mount points.
Run the following command:
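```bash
sudo fdisk /dev/xvdb
```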
Inside fdisk, create a single primary partition:
n → new partition
p → primary
Enter → use default partition number
Enter → use default start
Enter → use default end
w → write & exit
Resulting device: /dev/xvdb1
Format the Partition
In this guide, we will use ext4, which is perfectly stable, widely supported, and fully compatible with NFS workloads. Format the partition as ext4:
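```bash
sudo mkfs.ext4 /dev/xvdb1
```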
Your environment can safely use ext4 because it is mature, well understood, supported by every recovery tool, and handles general-purpose PVC workloads without any special tuning.
If you expect very high concurrency (hundreds of pods writing simultaneously or heavy CI/CD cache workloads), you may format the disk as XFS instead:
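```bash
sudo mkfs.xfs /dev/xvdb1
```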
XFS advantages:

- Better scalability under highly parallel I/O
- Strong performance with large files and large directories
- Online filesystem growth with xfs_growfs
But for most customers, ext4 is more than sufficient, easier to operate, and ideal for general-purpose NFS in Kubernetes.
Create and Mount the Storage Directory
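Create the directory and mount the new partition:

```bash
sudo mkdir -p /pvcdata
sudo mount /dev/xvdb1 /pvcdata
```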
This directory becomes the root path for all NFS-provisioned PVCs.
The external-provisioner will create subdirectories inside /pvcdata.
Make the Mount Persistent
Add this line to /etc/fstab:
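```
/dev/xvdb1  /pvcdata  ext4  defaults,nofail  0 2
```

(On EC2, referencing the filesystem by the UUID reported by blkid is more robust than the device name.)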
Apply:
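```bash
sudo mount -a
df -h /pvcdata
```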
This ensures /pvcdata is automatically mounted after every reboot.
Validate Storage Permissions
NFS clients (EKS worker nodes) must be able to create folders and write files. Set appropriate permissions:
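```bash
sudo chown nobody:nobody /pvcdata   # nobody:nogroup on Debian/Ubuntu
sudo chmod 755 /pvcdata
```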
Kubernetes pods often run with random, auto-assigned, or application-defined UIDs that do not match nobody, nogroup, or root. This creates permission mismatches on NFS-backed volumes.
Because NFS does not enforce POSIX ACLs the same way local filesystems do, you may observe:

- "Permission denied" errors when applications write to the volume
- Pods crash-looping because they cannot create lock or data files
- Files on the server owned by unexpected UIDs/GIDs
In such cases, temporarily relaxing permissions to 777 may be necessary:
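```bash
sudo chmod -R 777 /pvcdata
```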
Edit /etc/exports:
Add:
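```
/pvcdata 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
```

Here 10.0.0.0/16 is a placeholder; substitute the CIDR of your worker node subnet.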
Instead of defining each worker node’s IP address individually, it is more efficient to authorize the entire worker node subnet (CIDR range) in the export configuration.
Meaning of the export flags:
- rw: allows clients read/write access
- sync: writes are committed to disk before the server replies; safer for data consistency
- no_subtree_check: skips subtree validation; improves reliability and performance
- no_root_squash: allows root on the client to act as root on the server (required for dynamic provisioning)
no_root_squash gives root-level access to all NFS clients from the allowed subnet.
This is required because:

- The provisioner pod creates, chowns, and archives PVC subdirectories as root
- With root squashing enabled, those operations would be remapped to the unprivileged nobody user and fail
Without no_root_squash, you would see:

- "Permission denied" errors in the provisioner logs whenever it tries to create a PVC subdirectory
- PVCs stuck in the Pending state
It should only be used when:

- The export is restricted to a trusted CIDR, such as the worker node subnet
- Network access to the server is controlled by security groups or firewalls
- The server is dedicated to Kubernetes-provisioned data
Enable and Start the NFS Server
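```bash
sudo systemctl enable --now nfs-server
```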
If needed, disable the firewall:
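```bash
sudo systemctl disable --now firewalld
```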
Use this only in trusted networks (VPCs, on-prem behind firewalls, VPNs). For production, selectively allow NFS ports instead of disabling the firewall completely.
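Reload the exports:

```bash
sudo exportfs -ra
```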
This reloads the export configuration without restarting the NFS service.
Check the export list:
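```bash
showmount -e localhost
```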
You should see:
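```
Export list for localhost:
/pvcdata 10.0.0.0/16
```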
Verify that the server is active and that port 2049 (NFS service port) is listening:
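```bash
sudo systemctl status nfs-server
sudo ss -tlnp | grep 2049
```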
NFSv4+ is recommended for EKS clusters for:

- A single firewall port: everything runs over TCP 2049, with no extra rpcbind or mountd rules needed in security groups
- Built-in locking, with no separate lock daemon to manage
- Better security and performance characteristics than NFSv3
Check the configuration:
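```bash
grep vers /etc/nfs.conf
```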
Expected defaults:
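```
# vers3=y
# vers4=y
# vers4.0=y
# vers4.1=y
# vers4.2=y
```

On a stock RHEL-family install these entries are commented out because they are the compiled-in defaults, meaning NFSv4.x is already enabled.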
We’ll now continue by installing the nfs-subdir-external-provisioner in our cluster using Helm.
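Add the chart repository first:

```bash
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
```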
Create a custom nfs-values.yaml file to define your NFS configuration, StorageClass behavior, and deployment strategy.
Below is an example configuration you can adapt:
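(The server address below is a placeholder; the StorageClass name nfs-client is the chart default.)

```yaml
nfs:
  server: 10.0.0.10          # replace with your NFS server's IP or DNS name
  path: /pvcdata             # the directory exported in /etc/exports

storageClass:
  create: true
  name: nfs-client           # PVCs reference this in storageClassName
  defaultClass: false
  reclaimPolicy: Retain      # keep PV directories when PVCs are deleted
  archiveOnDelete: true      # rename rather than remove directories on delete
  accessModes: ReadWriteMany
```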
Important fields explained:

- nfs.server / nfs.path: the NFS endpoint and export prepared earlier
- storageClass.name: the name PVCs must reference in storageClassName
- reclaimPolicy: what happens to the PV and its backing directory when the PVC is deleted
- archiveOnDelete: when true, deleted PVC directories are renamed with an archived- prefix instead of being removed
If you use reclaimPolicy: Delete (with archiveOnDelete: false), the provisioner removes the underlying NFS directory as soon as the PVC is deleted. There is no recovery unless you've taken an external backup. This is why Retain is strongly recommended for:

- Production data
- Shared datasets consumed by multiple teams or workloads
- Anything that cannot be quickly regenerated
Once your nfs-values.yaml file is ready, install the provisioner using Helm:
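```bash
# release name and namespace below are example choices
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -f nfs-values.yaml \
  --namespace nfs-provisioner --create-namespace
```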
This will deploy the provisioner, create the StorageClass, and enable dynamic PVC provisioning backed by your NFS server.
You should now see:
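```bash
kubectl get pods -n nfs-provisioner
kubectl get storageclass nfs-client
```

A Running provisioner pod, and an nfs-client StorageClass whose PROVISIONER column points at the deployment.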
After installing the NFS Subdir External Provisioner, you can validate that dynamic provisioning and RWX access work correctly by deploying a simple test workload.
This test will:

- Create a ReadWriteMany PVC backed by the nfs-client StorageClass
- Run a pod on every node (via a DaemonSet) that mounts the same volume
- Have each pod write timestamps into a per-node subdirectory
- Let you verify the resulting files directly on the NFS server
Apply the following test-nfs.yaml manifest:
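A minimal sketch (the PVC name test-nfs-pvc and the app label nfs-test are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nfs-test
spec:
  selector:
    matchLabels:
      app: nfs-test
  template:
    metadata:
      labels:
        app: nfs-test
    spec:
      containers:
        - name: writer
          image: busybox
          env:
            - name: NODE_NAME        # inject the node's name via the downward API
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          command:
            - sh
            - -c
            - |
              # each pod writes into its own per-node subdirectory
              mkdir -p /data/$NODE_NAME
              while true; do
                date | tee -a /data/$NODE_NAME/test.txt
                sleep 10
              done
          volumeMounts:
            - name: nfs-vol
              mountPath: /data
      volumes:
        - name: nfs-vol
          persistentVolumeClaim:
            claimName: test-nfs-pvc
```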
Important fields explained:

- accessModes: ReadWriteMany lets pods on multiple nodes mount the volume read-write simultaneously
- storageClassName: nfs-client must match the StorageClass created by the provisioner
- The DaemonSet schedules one writer pod per worker node, so every node exercises the mount
- tee -a sends each timestamp both to the pod logs and to the shared volume
Then apply it:
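```bash
kubectl apply -f test-nfs.yaml
```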
Pick any pod and check the logs:
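```bash
kubectl get pods -l app=nfs-test -o wide
kubectl logs <pod-name>
```

Each pod should print a fresh timestamp every ten seconds, matching what it appends to test.txt.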
Each node should write its own timestamps into its own subdirectory on the NFS server.
The structure created on NFS should look like:
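```
/pvcdata/
└── default-test-nfs-pvc-pvc-1a2b3c.../
    ├── ip-10-0-1-23.ec2.internal/
    │   └── test.txt
    └── ip-10-0-2-45.ec2.internal/
        └── test.txt
```

The top-level directory follows the provisioner's ${namespace}-${pvcName}-${pvName} naming pattern; the node hostnames shown are illustrative.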
And the content:
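```bash
cat /pvcdata/default-test-nfs-pvc-pvc-1a2b3c.../ip-10-0-1-23.ec2.internal/test.txt
```

```
Tue Jan  7 10:15:04 UTC 2025
Tue Jan  7 10:15:14 UTC 2025
Tue Jan  7 10:15:24 UTC 2025
```

(Paths and timestamps are illustrative and will differ in your cluster.)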
Each folder contains a test.txt file with timestamps written by the corresponding node.
This verifies that:

- Dynamic provisioning works: the PVC was bound and its subdirectory was created automatically
- RWX access works: pods on different nodes mounted and wrote to the same volume at the same time
- Data lands on the NFS server, where it can be inspected and backed up
From the NFS host:
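One way to watch the writes land, assuming the layout above:

```bash
tail -n 2 /pvcdata/*/*/test.txt
```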
Example output:
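```
==> /pvcdata/default-test-nfs-pvc-pvc-1a2b3c.../ip-10-0-1-23.ec2.internal/test.txt <==
Tue Jan  7 10:21:34 UTC 2025
Tue Jan  7 10:21:44 UTC 2025

==> /pvcdata/default-test-nfs-pvc-pvc-1a2b3c.../ip-10-0-2-45.ec2.internal/test.txt <==
Tue Jan  7 10:21:36 UTC 2025
Tue Jan  7 10:21:46 UTC 2025
```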
This confirms:

- The export is reachable and writable from every worker node
- Multiple nodes are appending to the same RWX volume concurrently
- The full path (EKS pod → NFS client → NFS server → disk) works end to end