Replacing a persistent volume with EFS
CircleCI server running on AWS requires persistent volumes, and these are backed by EBS volumes by default. However, an EBS volume lives in a single AWS availability zone within its region, so an availability-zone-level failure can take the service down.
This article walks through the steps to replace an EBS-backed persistent volume with an EFS-backed one. Specifically, it examines a strategy that lets any pod using a persistent volume be backed by EFS instead. The goal is to reduce the impact of the availability-zone-level failures mentioned above.
Before replacing the PV with EFS, follow the AWS documentation and make sure the EFS CSI driver is installed and available as a storage class.
$ kubectl get sc
# Example Outputs
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ebs-sc          ebs.csi.aws.com         Delete          WaitForFirstConsumer   false                  25d
efs-sc          efs.csi.aws.com         Delete          WaitForFirstConsumer   false                  18d
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  28d
(Optional) If you are already using EBS as a persistent volume, take a snapshot and migrate the data first. Note: this migration should be performed after deleting the StatefulSet but before deleting the PVC in step 1 below. This ensures that the migration copies the newest data and that the migrated data is consistent.
- Here is a sample way to migrate data from EBS to EFS.
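One possible approach (a sketch, not taken from an official procedure): run a short-lived pod that mounts both the old EBS-backed claim and the new EFS-backed claim, and copy the data across while preserving ownership and permissions. The pod name, image choice, and the `<source-ebs-pvc>`/`<target-efs-pvc>` claim names below are placeholders you must replace; both claims must already exist in the namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  # placeholder name for the one-off migration pod
  name: efs-migration
  namespace: circleci-server
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: alpine:3
      # cp -a preserves ownership, permissions, and timestamps
      command: ["sh", "-c", "cp -a /ebs/. /efs/"]
      volumeMounts:
        - name: ebs-vol
          mountPath: /ebs
        - name: efs-vol
          mountPath: /efs
  volumes:
    - name: ebs-vol
      persistentVolumeClaim:
        claimName: <source-ebs-pvc>
    - name: efs-vol
      persistentVolumeClaim:
        claimName: <target-efs-pvc>
```

Once the pod completes (`kubectl -n <namespace> get pod efs-migration` shows `Completed`), delete it before continuing.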
Next, delete the StatefulSet and PVC.
# delete the StatefulSet
$ kubectl -n <namespace> delete sts/<StatefulSetName>
# delete the PVC
$ kubectl -n <namespace> delete pvc/<PersistentVolumeClaimName>
Next, create a new PV. Please refer to the following sample YAML and change the contents.
- spec.capacity.storage is a required field in Kubernetes, but its value is ignored by the Amazon EFS CSI driver.
- (optional) spec.persistentVolumeReclaimPolicy: Retain is recommended so the EFS data survives if the PV is released.
apiVersion: v1
kind: PersistentVolume
metadata:
  # RENAME THIS ACCORDINGLY!
  name: pv-mongo-efs
spec:
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    # EDIT THIS ACCORDINGLY!
    # This should match the ID of the EFS file system you created
    volumeHandle: fs-03a913f6bddd3b4e2
  capacity:
    # required by Kubernetes, but ignored by the EFS CSI driver
    storage: 8Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
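Assuming the manifest above is saved as pv-mongo-efs.yaml (the filename is only a suggestion), the PV can be created and verified like this:

```shell
# create the PV from the sample manifest
$ kubectl apply -f pv-mongo-efs.yaml

# confirm the PV exists and uses the efs-sc storage class;
# its status should be Available until a PVC binds to it
$ kubectl get pv/pv-mongo-efs
```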
Next, create new PVCs. Please refer to the following sample YAML files and change the contents.
Each file is derived from the default EBS-based settings of server 4.0.0, so there is no guarantee that the settings are valid for every installation. Please create/modify them accordingly.
pvc-mongo-efs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/component: mongodb
    app.kubernetes.io/instance: circleci-server
    app.kubernetes.io/name: mongodb
  name: datadir-mongodb-0
  # MODIFY THIS ACCORDINGLY!
  # based on your namespace
  namespace: circleci-server
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  # EDIT THIS ACCORDINGLY!
  # This should be the unique name of the PV you created earlier.
  volumeName: pv-mongo-efs
pvc-vault-efs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: vault
    layer: data
  name: data-vault-0
  # MODIFY THIS ACCORDINGLY!
  # based on your namespace
  namespace: circleci-server
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  # EDIT THIS ACCORDINGLY!
  # This should be the unique name of the PV you created earlier.
  volumeName: pv-vault-efs
pvc-rabbitmq-efs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/instance: circleci-server
    app.kubernetes.io/name: rabbitmq
  name: data-rabbitmq-0
  # MODIFY THIS ACCORDINGLY!
  # based on your namespace
  namespace: circleci-server
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  # EDIT THIS ACCORDINGLY!
  # This should be the unique name of the PV you created earlier.
  volumeName: pv-rabbitmq-efs
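Assuming the three manifests are saved under the filenames shown above, they can be applied in a single command:

```shell
# create all three EFS-backed PVCs at once
$ kubectl apply \
    -f pvc-mongo-efs.yaml \
    -f pvc-vault-efs.yaml \
    -f pvc-rabbitmq-efs.yaml
```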
PVC setup might take a little time. When the PVC status becomes Bound, the PVC setup is complete.
$ kubectl -n <namespace> get pvc/datadir-mongodb-0
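Instead of polling manually, you can block until a claim is bound. This is a sketch using kubectl wait with a JSONPath condition (supported in kubectl v1.23 and later):

```shell
# wait up to two minutes for the claim to reach the Bound phase
$ kubectl -n <namespace> wait \
    --for=jsonpath='{.status.phase}'=Bound \
    pvc/datadir-mongodb-0 --timeout=120s
```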
To run MongoDB on EFS, you should enable the init container that changes the owner and group of the data directory. Reference: https://artifacthub.io/packages/helm/bitnami/mongodb#volume-permissions-parameters
Please add the volumePermissions.enabled parameter to your values.yaml file.
# values.yaml
mongodb:
  auth:
    rootPassword: ******
    password: ******
  volumePermissions:
    enabled: true
To re-create the StatefulSet, press the Deploy button once again in the server's KOTS admin console.
If you want to try this on server 4.x, re-deploy the current version with helm upgrade instead.
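For a Helm-based (server 4.x) installation, the re-deploy is a plain helm upgrade. The release name, chart reference, namespace, version, and values file below are placeholders; reuse exactly the ones from your original installation:

```shell
# re-deploy the currently installed version so the
# StatefulSets are re-created against the new PVCs
$ helm upgrade circleci-server <chart-reference> \
    -n <namespace> \
    --version <installed-version> \
    -f values.yaml
```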
You can check the StatefulSet with the following command.
$ kubectl -n <namespace> get sts/mongodb
# Example Outputs
NAME READY AGE
mongodb 1/1 140m