Can anyone give me some pointers for mounting EFS ...
# ask-the-community
b
Can anyone give me some pointers for mounting EFS (shared storage) to the Flyte task containers? I have a StorageClass and PersistentVolume set up, and am now trying to get the PersistentVolumeClaim created by using the cluster_resource_manager templates. My template looks like this, but I never see a PVC created when tasks are running...
- key: ac_project_efs_shared
  value: |
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: efs-shared
          namespace: {{ namespace }}
        spec:
          accessModes:
            - ReadWriteMany
          storageClassName: efs-sc
          resources:
            requests:
              storage: 5Gi
          volumeName: efs-shared
y
what are you trying to do?
can you check to see that the pvc exists in each namespace?
who will create the pv?
if you’re trying to create pod tasks you can specify all the volumes and mounts in the pod spec
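For reference, declaring the volume and mount directly in a pod spec might look like this (a sketch; the volume name, claim name, and image are assumptions based on this thread, not anything Flyte-specific):

```yaml
# Sketch of a pod spec mounting an existing PVC.
# "efs-shared" (volume and claim name) and the image are assumed names.
apiVersion: v1
kind: Pod
metadata:
  name: example-task-pod
spec:
  containers:
    - name: task
      image: my-task-image   # hypothetical task image
      volumeMounts:
        - name: efs-shared
          mountPath: /shared
  volumes:
    - name: efs-shared
      persistentVolumeClaim:
        claimName: efs-shared
```

The same volumes/volumeMounts fields apply whether the spec comes from a raw pod task or a pod template.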
b
what are you trying to do?
I am trying to get Flyte to auto-create the PVC for each namespace (Flyte project). I read in a few posts here and on GitHub that using the templates in the cluster_resource_manager should help with that.
can you check to see that the pvc exists in each namespace?
No PVCs exist
$ kubectl get pvc --all-namespaces
No resources found
who will create the pv?
I want the resource template to create it (and a PVC) when a new project is created, since a PV can be bound to only one PVC
if you’re trying to create pod tasks you can specify all the volumes and mounts in the pod spec
If I were creating my own pod then yes, that is what I would do. But what I want is for the pods that Flyte creates to run a project's workflow tasks to use that project's PVC to mount the EFS drive. Since the PVC is not created, perhaps I am applying the template change incorrectly. I have a git clone of the flyte repo. I edited
/charts/flyte-core/values-eks.yaml
and added the template from this line... https://github.com/flyteorg/flyte/blob/master/charts/flyte-core/values-eks.yaml#L396 I then ran this command from the root of the cloned repo to deploy the change
helm upgrade flyte-backend ./charts/flyte-binary --namespace flyte --values ~/mysettings/eks-starter.yaml
Does this all look correct so far?
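A side note worth double-checking here: flyte-core and flyte-binary are different charts with different values layouts. flyte-core takes the templates as a list of key/value entries under cluster_resource_manager, while flyte-binary (which the helm upgrade command above targets) appears to take them under clusterResourceTemplates.inline — a sketch, assuming that key name:

```yaml
# Sketch for the flyte-binary chart; the clusterResourceTemplates.inline
# key name is an assumption based on that chart's values layout.
# Each entry becomes a file in the cluster-resource-templates ConfigMap.
clusterResourceTemplates:
  inline:
    102_project_efs_shared.yaml: |
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: efs-shared
        namespace: '{{ namespace }}'
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: efs-sc
        resources:
          requests:
            storage: 5Gi
```

Editing flyte-core's values-eks.yaml has no effect on a flyte-binary release unless that file is actually passed to helm.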
I can mount the EFS volume in the flyte-backend-flyte-binary pod by manually creating a PV & PVC in the flyte namespace and adding this to my eks-starter.yaml, so the EFS setup itself is correct.
deployment:
  extraVolumeMounts:
    - mountPath: /shared
      name: efs-shared
  extraVolumes:
    - name: efs-shared
      persistentVolumeClaim:
        claimName: efs-shared
But it is not clear to me how to get the Flyte task pods in flytesnacks-development, or any other Flyte project, to mount the EFS volume. Apparently the cluster_resource_manager templates should do it, but it is not working for me: I don't see them create the PV & PVC, let alone see the pods mount the volume
I can see the ConfigMap is populated:
$ kubectl describe configmaps flyte-backend-flyte-binary-cluster-resource-templates -n flyte
Name:         flyte-backend-flyte-binary-cluster-resource-templates
Namespace:    flyte
Labels:       app.kubernetes.io/instance=flyte-backend
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=flyte-binary
              app.kubernetes.io/version=1.16.0
              helm.sh/chart=flyte-binary-v0.1.10
Annotations:  meta.helm.sh/release-name: flyte-backend
              meta.helm.sh/release-namespace: flyte

Data
====
001_namespace.yaml:
----
apiVersion: v1
kind: Namespace
metadata:
  name: '{{ namespace }}'

101_project_efs_shared_volume.yaml:
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ namespace }}_efs-shared
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxx:/shared

102_project_efs_shared.yaml:
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ namespace }}_efs-shared
  namespace: {{ namespace }}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: {{ namespace }}_efs-shared


BinaryData
====

Events:  <none>
I was able to manually set up the PV & PVC for the project flytesnacks-development, then set a PodTemplate to mount the PVC volume (as per https://docs.flyte.org/en/latest/deployment/configuration/general.html#using-default-k8s-podtemplates ). However, I still can't get the cluster_resource_manager templates to automatically provision a PV & PVC when a new project is created. Any suggestions?
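For anyone following along, the default PodTemplate approach mentioned above can be sketched roughly like this (names here are assumptions; the PodTemplate must exist in each project namespace and match the configured default-pod-template-name):

```yaml
# Sketch of a default PodTemplate mounting the per-project PVC.
# "flyte-template" and "efs-shared" are assumed names; Flyte expects a
# container named "default" whose settings are merged into task pods.
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-template
  namespace: flytesnacks-development
template:
  spec:
    containers:
      - name: default
        image: placeholder-image   # placeholder only; the task image overrides it
        volumeMounts:
          - name: efs-shared
            mountPath: /shared
    volumes:
      - name: efs-shared
        persistentVolumeClaim:
          claimName: efs-shared
```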
y
can you check the logs in the cluster resource manager pod?
and check for permissions, make sure that the sa that the cluster manager runs with has perms to do these things
i don’t necessarily see anything wrong. look through the logs, make sure perms are there, make sure the cluster manager has picked up the new configmap, make sure it’s mounted correctly. there should at least be logs saying what’s happening or not happening.
b
Finally got it working after finding the cluster resource manager logs in the AWS EKS console. It came down to configuring RBAC in my values.yaml to grant permissions not only for persistentvolumes & persistentvolumeclaims, but also for podtemplates
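For anyone hitting the same thing, that fix can be sketched as extra RBAC rules in the chart values (a sketch; the rbac.extraRules key name is an assumption about the flyte-binary chart, and the verb list may need trimming for your setup):

```yaml
# Sketch: extra RBAC rules so the cluster resource manager's service
# account can manage PVs, PVCs, and pod templates.
# rbac.extraRules is an assumed values key; adjust to your chart.
rbac:
  extraRules:
    - apiGroups: [""]
      resources:
        - persistentvolumes
        - persistentvolumeclaims
        - podtemplates
      verbs: ["create", "get", "list", "watch", "update", "delete"]
```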
Thank you for your help 🙂