# ask-the-community
b
Hi, this is probably more of a Kubernetes question, but maybe somebody has experienced it: we would like to run Flyte tasks that mount and access an NFS in the started pod. My current approach was to fill in the `podspec` of a task with e.g. `V1NFSVolumeSource` to mount the NFS into the container. While that generally works, I'm running into permission issues with the flytekit Docker image (most likely because it starts as the flytekit user). When I try to access the mounted NFS path, whether from Python or with Linux commands, I get permission errors. That also makes sense looking at the file system (`data` being the NFS mount):
```
d---------   8 root     root     4096 Jan 11 13:48 data
drwxr-xr-x   1 flytekit flytekit 4096 Sep  3  2022 home
```
The NFS itself definitely works, as I was able to mount and access it from a dummy nginx container. My last attempt was to let Kubernetes adjust the permissions by setting a `securityContext` with `fsGroup: 1000` (which should be the flytekit group), but it made no difference. Happy for any tips! And I can definitely provide more details if necessary.
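For reference, the approach described above looks roughly like this — a minimal sketch assuming flytekit's `PodTemplate` support; the NFS server address, export path, and file name are placeholders, not values from the thread:
```python
# Sketch: mount an NFS export into a Flyte task pod and set fsGroup.
from flytekit import task, PodTemplate
from kubernetes.client import (
    V1Container,
    V1NFSVolumeSource,
    V1PodSecurityContext,
    V1PodSpec,
    V1Volume,
    V1VolumeMount,
)

nfs_pod_template = PodTemplate(
    pod_spec=V1PodSpec(
        containers=[
            V1Container(
                name="primary",  # must match primary_container_name below
                volume_mounts=[V1VolumeMount(name="data", mount_path="/data")],
            )
        ],
        volumes=[
            V1Volume(
                name="data",
                # hypothetical server and export path
                nfs=V1NFSVolumeSource(server="nfs.example.internal", path="/exports/data"),
            )
        ],
        # fsGroup 1000 is assumed to be the flytekit group; as discussed
        # below, a plain NFS volume may ignore this setting entirely.
        security_context=V1PodSecurityContext(fs_group=1000),
    ),
    primary_container_name="primary",
)

@task(pod_template=nfs_pod_template)
def read_data() -> int:
    with open("/data/input.txt") as f:  # hypothetical file on the export
        return len(f.read())
```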
Just dropping a related issue: https://github.com/kubernetes/examples/issues/260
From there, one option would be to use an initContainer that runs as root to adjust the permissions properly (see the sketch below).
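A rough sketch of that workaround — the busybox image and UID/GID 1000 are assumptions, not anything Flyte mandates:
```python
# initContainer that runs as root before the task container starts and
# hands ownership of the NFS mount to the flytekit user.
from kubernetes.client import V1Container, V1SecurityContext, V1VolumeMount

fix_permissions = V1Container(
    name="fix-nfs-permissions",
    image="busybox:1.36",
    command=["sh", "-c", "chown -R 1000:1000 /data"],
    security_context=V1SecurityContext(run_as_user=0),  # root, only for the chown
    volume_mounts=[V1VolumeMount(name="data", mount_path="/data")],
)
# Then add it to the pod spec above: init_containers=[fix_permissions]
```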
d
@Broder Peters so what's happening here is that even if you set the fsGroup, it's not reflected in the actual mount?
Can you check the config of your CSI driver? According to this KEP (which went GA on K8s 1.23), CSIDrivers will probably not allow permission changes by default; there are some conditions that enable that behavior. This is a good reference for that: https://kubernetes-csi.github.io/docs/support-fsgroup.html
For example, this is what I get on an EKS cluster:
```
k describe csidriver efs.csi.aws.com

Name:         efs.csi.aws.com
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2023-12-07T11:24:19Z
  Resource Version:    273
  UID:                 92832eb0-c7c2-4d93-9bcd-70af57192b5e
Spec:
  Attach Required:     false
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   false
  Requires Republish:  false
  Storage Capacity:    false
  Volume Lifecycle Modes:
    Persistent
```
As I haven't added `fstype` to the spec, whatever I set in the `podSecurityContext` will probably be ignored.
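The same `fsGroupPolicy` field can also be read programmatically — a small sketch with the kubernetes Python client, showing the same information as the `kubectl describe csidriver` output above:
```python
# Sketch: inspect a CSIDriver's fsGroupPolicy via the Python client.
from kubernetes import client, config

config.load_kube_config()
driver = client.StorageV1Api().read_csi_driver("efs.csi.aws.com")
print(driver.spec.fs_group_policy)  # e.g. "ReadWriteOnceWithFSType"
```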
b
> so what's happening here is that even if you set the fsGroup, it's not reflected in the actual mount?
Exactly.
I'm not really deep into the CSI stuff yet, but from what I understand, the NFS bit isn't using a CSI driver at all, since I'm using the "native" `V1NFSVolumeSource`. This is the only driver I have:
```
➜  ~ k get csidriver
NAME            ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES       AGE
csi.tigera.io   true             true             false             <unset>         false               Ephemeral   22d
➜  ~ k describe csidriver csi.tigera.io
Name:         csi.tigera.io
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2023-12-20T09:47:02Z
  Owner References:
    API Version:           operator.tigera.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Installation
    Name:                  default
    UID:                   7a335290-5260-4ac7-95f9-3ad30944954e
  Resource Version:        6562
  UID:                     6b40282e-3aec-49f8-b549-f4518403026d
Spec:
  Attach Required:     true
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   true
  Requires Republish:  false
  Se Linux Mount:      false
  Storage Capacity:    false
  Volume Lifecycle Modes:
    Ephemeral
Events:  <none>
```
An alternative might be to use https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner to mount the NFS via a StorageClass, PVCs, and PVs. But while setting that up I saw that k8s has the native NFSVolumeSource and got curious why I shouldn't just use that one.
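That alternative would look roughly like this — a sketch assuming the provisioner is installed with its default storage class name `nfs-client` (adjust to your install); the PVC name is a placeholder:
```python
# Sketch: request a PVC against the nfs-subdir-external-provisioner storage
# class, then mount the claim instead of the raw NFS volume.
from kubernetes.client import (
    V1ObjectMeta,
    V1PersistentVolumeClaim,
    V1PersistentVolumeClaimSpec,
    V1PersistentVolumeClaimVolumeSource,
    V1ResourceRequirements,
    V1Volume,
)

pvc = V1PersistentVolumeClaim(
    metadata=V1ObjectMeta(name="flyte-nfs-data"),  # hypothetical claim name
    spec=V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="nfs-client",  # the chart's default class name
        resources=V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

# In the task's pod spec, replace the nfs= volume with the claim:
volume = V1Volume(
    name="data",
    persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(
        claim_name="flyte-nfs-data"
    ),
)
```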
d
Yeah, dynamic provisioning might be better. Using an initContainer would add latency to task executions, I guess.
b
Will give it a look. Thanks for the input!