Migrate persistent volume data
To switch the Storage Class (SC) of existing Persistent Volumes (PVs), for example from the default EBS-based SC to an EFS-backed SC to eliminate availability zone lock-in, you must migrate the data on the existing PVs to newly-created PVs.
There are many ways to do this; the example below migrates the data by mounting an EBS volume and an EFS volume in the same pod.
For data consistency, it is recommended to stop the application during the migration so that the data does not change while it is being copied.
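One way to stop the application is to scale the workloads that write to the volume down to zero replicas. This is a minimal sketch; <<deployment-name>> is a placeholder for your actual workload:
kubectl scale deployment <<deployment-name>> --replicas=0 -n <namespace>
Scale the workload back up once the migration is complete.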
- Make sure you already meet the prerequisites for using EFS.
- Prepare an empty EFS file system.
  - Make sure a mount target is configured for each subnet the worker nodes are in. You can verify this as shown below.
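To verify the mount targets, you can list them with the AWS CLI (a quick check; <<fs-id>> is a placeholder for the ID of the EFS file system you created) and confirm one exists for each worker-node subnet:
aws efs describe-mount-targets --file-system-id <<fs-id>>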
Please refer to the following sample YAML and change the contents accordingly. It is recommended to replace <<name>> with a unique value.
First, create a new EBS PV.
# vim mg-pv-<<name>>-ebs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # RENAME THIS ACCORDINGLY!
  name: mg-pv-<<name>>-ebs
  # These labels should match the AZ of the new EBS volume so that your pod starts in the same AZ
  labels:
    topology.kubernetes.io/region: ap-northeast-2
    topology.kubernetes.io/zone: ap-northeast-2a
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    # EDIT THIS ACCORDINGLY!
    # This should match the ID of the new EBS volume in the correct AZ
    volumeID: aws://ap-northeast-2a/vol-0a59e45edd339ae3b
  capacity:
    storage: 8Gi
  # Retain keeps the EBS volume after you delete the claim; not strictly necessary, but recommended for easier troubleshooting
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  volumeMode: Filesystem
Next, create a new EFS PV.
# vim mg-pv-<<name>>-efs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # RENAME THIS ACCORDINGLY!
  # Use the same <<name>> value as the EBS PV above
  name: mg-pv-<<name>>-efs
spec:
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    # EDIT THIS ACCORDINGLY!
    # This should match the ID of the EFS file system you created
    volumeHandle: fs-07a10f1beb74bdb47
  capacity:
    # The storage value is a required field in Kubernetes, but it is ignored by the Amazon EFS CSI driver
    # Reference: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html#efs-sample-app
    storage: 8Gi
  persistentVolumeReclaimPolicy: Retain
  # EDIT THIS ACCORDINGLY!
  # This should match your EFS storage class name
  storageClassName: efs-sc
  volumeMode: Filesystem
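After preparing both manifests, apply them and confirm the PVs show an Available status before moving on (assuming the file names used above; PVs are cluster-scoped, so no namespace flag is needed):
kubectl apply -f mg-pv-<<name>>-ebs.yaml
kubectl apply -f mg-pv-<<name>>-efs.yaml
kubectl get pv mg-pv-<<name>>-ebs mg-pv-<<name>>-efs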
Please refer to the following sample YAML and change the contents accordingly. It is recommended to replace <<name>> with the value you used when creating the PVs earlier.
First, create a new EBS PVC.
# vim mg-pvc-<<name>>-ebs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mg-pvc-<<name>>-ebs
  # MODIFY THIS ACCORDINGLY!
  # Set this to your namespace
  namespace: circleci-server
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gp2
  volumeMode: Filesystem
  # EDIT THIS ACCORDINGLY!
  # This should be the name of the EBS PV you created earlier
  volumeName: mg-pv-<<name>>-ebs
Next, create a new EFS PVC.
# vim mg-pvc-<<name>>-efs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mg-pvc-<<name>>-efs
  # MODIFY THIS ACCORDINGLY!
  # Set this to your namespace
  namespace: circleci-server
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  # EDIT THIS ACCORDINGLY!
  # This should be the name of the EFS PV you created earlier
  volumeName: mg-pv-<<name>>-efs
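After preparing both manifests, apply them and confirm the PVCs reach a Bound status (assuming the file names used above and the circleci-server namespace from the samples; since the manifests set the namespace, apply needs no -n flag):
kubectl apply -f mg-pvc-<<name>>-ebs.yaml
kubectl apply -f mg-pvc-<<name>>-efs.yaml
kubectl get pvc mg-pvc-<<name>>-ebs mg-pvc-<<name>>-efs -n circleci-server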
Please refer to the following sample YAML and change the contents accordingly. It is recommended to replace <<name>> with the value you used when creating the PVs earlier.
# vim mg-pod-<<name>>.yaml
apiVersion: v1
kind: Pod
metadata:
  name: migration-<<name>>
spec:
  containers:
    - name: mount
      image: ubuntu
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: ebs-volume
          mountPath: /ebs
        - name: efs-volume
          mountPath: /efs
  volumes:
    - name: ebs-volume
      persistentVolumeClaim:
        claimName: mg-pvc-<<name>>-ebs
    - name: efs-volume
      persistentVolumeClaim:
        claimName: mg-pvc-<<name>>-efs
kubectl apply -f <file-name> -n <namespace>
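Before copying any data, wait for the migration pod to reach the Running state, for example:
kubectl get pod migration-<<name>> -n <namespace>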
Migrate the data on EBS to EFS using rsync. Replace <<name>> with the value you used when creating the PVs earlier.
kubectl exec -it migration-<<name>> -- apt update
kubectl exec -it migration-<<name>> -- apt -y install rsync
kubectl exec -it migration-<<name>> -- rsync -a /ebs/ /efs/
Then, check that the files have been transferred.
kubectl exec -it migration-<<name>> -- ls -al /efs
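Optionally, compare the two directory trees to confirm the copy is complete. This is a quick sanity check; if diff is not available in the container, install diffutils the same way rsync was installed above:
kubectl exec -it migration-<<name>> -- diff -r /ebs /efs
No output means the trees match.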
This procedure leaves five resources behind (2 PVs, 2 PVCs, and 1 pod).
Delete them after the migration has completed successfully.
(If you created the PVs with persistentVolumeReclaimPolicy: Retain, the EBS volumes will remain in your AWS account.)
kubectl delete <<resource-name>> -n <namespace>
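For example, with the names used in this guide (PVs are cluster-scoped, so they take no namespace flag):
kubectl delete pod migration-<<name>> -n <namespace>
kubectl delete pvc mg-pvc-<<name>>-ebs mg-pvc-<<name>>-efs -n <namespace>
kubectl delete pv mg-pv-<<name>>-ebs mg-pv-<<name>>-efs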