# ask-the-community
Hello, the Spark task is automatically being assigned the service account `spark`, even though I don't specify it on the command line with:
```shell
pyflyte register .... --service-account spark
```
And when the task's ephemeral pod runs with service account `spark` instead of `default`, the pod does not have permission to access S3. Where in the flyte-core Helm charts should the fix be made? Maybe the default for this section needs to be corrected?
```yaml
    - key: ad_spark_service_account
      value: |
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: spark
          namespace: {{ namespace }}
    - key: ae_spark_role_binding
      value: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: spark-role-binding
          namespace: {{ namespace }}
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: spark-role
        subjects:
        - kind: ServiceAccount
          name: spark
          namespace: {{ namespace }}
```
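If the intent is to keep using the `spark` service account, one possible fix (a sketch, not verified against your chart version) is to give that ServiceAccount the same `eks.amazonaws.com/role-arn` annotation that the `default` service account carries, directly in this template:

```yaml
    # Sketch: annotate the spark ServiceAccount with the EKS IAM role (IRSA)
    # so pods running under it can reach S3. The ARN below is the one shown
    # later in this thread; substitute your own role.
    - key: ad_spark_service_account
      value: |
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: spark
          namespace: {{ namespace }}
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::245085526351:role/flyte-role
```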
I got this. The `spark` service account does not have the AWS IAM role annotation set, unlike the `default` service account.
```text
kubectl describe sa -n flytesnacks-development
Name:                default
Namespace:           flytesnacks-development
Labels:              <none>
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::245085526351:role/flyte-role
Image pull secrets:  artifactory-da-reader-token
Mountable secrets:   default-token-x5dbz
Tokens:              <none>
Events:              <none>

Name:                spark
Namespace:           flytesnacks-development
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
```
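As a quick test outside of Helm (the cluster resource controller may recreate or overwrite the ServiceAccount, so the permanent fix belongs in the chart values), the existing service account can be annotated directly:

```shell
# One-off annotation for testing IRSA on the spark service account;
# the ARN is the one attached to the default service account above.
kubectl annotate serviceaccount spark -n flytesnacks-development \
  eks.amazonaws.com/role-arn=arn:aws:iam::245085526351:role/flyte-role
```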
To configure which serviceAccount is used by default, I think it's:
```yaml
configmap:
  adminServer:
    flyteadmin:
      k8sServiceAccount: default
```
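After changing that value and redeploying flyteadmin, one way to confirm which service account a new task pod actually runs under (the pod name below is a placeholder) is:

```shell
# Print the serviceAccountName of a running task pod in the project namespace.
kubectl get pod <task-pod-name> -n flytesnacks-development \
  -o jsonpath='{.spec.serviceAccountName}'
```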