# ask-the-community
c
Hi all, I encountered a problem where the Spark driver pod log cannot be displayed on the console. The pod log for a general Python Task works, but for a Spark Task it does not.
This is from the Helm values.yaml file that has been deployed. I set logs in the Spark plugin, but it's not working:
Copy code
logs:
            mixed:
              kubernetes-enabled: true
              kubernetes-url: |-
                http://10.233.112.73/#/log/{{ .namespace }}/{{ .podName }}/pod?namespace={{ .namespace }}
Here's the spark logs setting as viewed in the flyte-flyte-sandbox-config ConfigMap:
k
The format is wrong. It should be:
Copy code
http://10.233.112.73:30082/#!/log/{{ .namespace }}/{{ .podName }}/pod?namespace={{ .namespace }}
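For reference, {{ .namespace }} and {{ .podName }} are Flyte log-plugin template variables that get filled in per pod at runtime. For a hypothetical driver pod abc123-n0-0-driver in flytesnacks-development, the template above should resolve to something like:
Copy code
http://10.233.112.73:30082/#!/log/flytesnacks-development/abc123-n0-0-driver/pod?namespace=flytesnacks-development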
c
Got it, I'll try it.
Is this setting correct in the Helm values file?
Copy code
plugins:
        k8s:
          default-env-vars:
            - FLYTE_AWS_ENDPOINT: http://{{ printf "%s-minio" .Release.Name | trunc 63 | trimSuffix "-" }}.{{ .Release.Namespace }}:9000
            - FLYTE_AWS_ACCESS_KEY_ID: minio
            - FLYTE_AWS_SECRET_ACCESS_KEY: miniostorage
        spark:
          spark-config-default:
            - spark.driver.cores: "1"
            - spark.hadoop.fs.s3a.aws.credentials.provider: "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
            - spark.hadoop.fs.s3a.endpoint: http://{{ printf "%s-minio" .Release.Name | trunc 63 | trimSuffix "-" }}.{{ .Release.Namespace }}:9000
            - spark.hadoop.fs.s3a.access.key: "minio"
            - spark.hadoop.fs.s3a.secret.key: "miniostorage"
            - spark.hadoop.fs.s3a.path.style.access: "true"
            - spark.kubernetes.allocation.batch.size: "50"
            - spark.hadoop.fs.s3a.acl.default: "BucketOwnerFullControl"
            - spark.hadoop.fs.s3n.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
            - spark.hadoop.fs.AbstractFileSystem.s3n.impl: "org.apache.hadoop.fs.s3a.S3A"
            - spark.hadoop.fs.s3.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
            - spark.hadoop.fs.AbstractFileSystem.s3.impl: "org.apache.hadoop.fs.s3a.S3A"
            - spark.hadoop.fs.s3a.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
            - spark.hadoop.fs.AbstractFileSystem.s3a.impl: "org.apache.hadoop.fs.s3a.S3A"
          logs:
            mixed:
              kubernetes-enabled: true
              kubernetes-url: |-
                http://10.233.112.73:30082/#!/log/{{ .namespace }}/{{ .podName }}/pod?namespace={{ .namespace }}
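Note that two different template systems appear in this file: the {{ .Release.Name }} and {{ .Release.Namespace }} expressions in default-env-vars and spark-config-default are Helm expressions rendered at install time, while {{ .namespace }} and {{ .podName }} in the logs block are Flyte runtime variables that must survive Helm rendering verbatim. A minimal sketch of what the logs entry should look like once rendered into the ConfigMap:
Copy code
logs:
  mixed:
    kubernetes-enabled: true
    kubernetes-url: |-
      http://10.233.112.73:30082/#!/log/{{ .namespace }}/{{ .podName }}/pod?namespace={{ .namespace }}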
Same result:
Screenshot 2024-02-05 at 3.55.58 PM.png
When I manually adjusted the URL, it showed the correct result.
k
Seems like Helm can't parse the {{ }} templating correctly.
Could you edit the config directly and see if that works?
Copy code
kubectl -n flyte edit cm flyte-flyte-sandbox-config
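If the sandbox chart passes these values through Helm's tpl function, the {{ }} braces in values.yaml get evaluated (and swallowed) at install time. A common workaround, assuming tpl-rendered values, is to escape the braces so Helm emits them literally for Flyte to expand later; a hedged sketch:
Copy code
logs:
  mixed:
    kubernetes-enabled: true
    kubernetes-url: |-
      http://10.233.112.73:30082/#!/log/{{ "{{" }} .namespace {{ "}}" }}/{{ "{{" }} .podName {{ "}}" }}/pod?namespace={{ "{{" }} .namespace {{ "}}" }}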
c
OK, I'll try it.
Updated. After running
k -n flyte rollout restart deployment flyte-flyte-sandbox
I get the same URL result:
http://localhost:30082/#!/log/flytesnacks-development/a9wqtmj775jsbgvwwfdc-n0-0-driver/pod?namespace=flytesnacks-development
I re-launched the workflow (with a new execution ID).
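One way to rule out a stale config is to read the template back from the ConfigMap and confirm the rollout completed; a sketch, assuming the flyte namespace used above:
Copy code
# Check the edited template actually landed in the ConfigMap
kubectl -n flyte get cm flyte-flyte-sandbox-config -o yaml | grep -A 2 kubernetes-url
# Confirm the deployment finished rolling out with the new config
kubectl -n flyte rollout status deployment flyte-flyte-sandbox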
k
Mind creating an issue here? [flyte-bug]
k
I’ll take a look tomorrow morning
c
ok, thx