# announcements
s
Team, I am trying to update the Spark logging link with the below template URI.
```yaml
          - spark.hadoop.fs.AbstractFileSystem.s3a.impl: "org.apache.hadoop.fs.s3a.S3A"
          - spark.hadoop.fs.s3a.multipart.threshold: "536870912"
          - spark.blacklist.enabled: "true"
          - spark.blacklist.timeout: "5m"
          - spark.task.maxFailures: "8"
        logs:
          mixed:
            kubernetes-enabled: true
            kubernetes-template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace:{{ .namespace }} pod_name:{{ .podName }}&cols=&index=&messageDisplay=inline&stream_sort=time,desc&viz=stream&from_ts=1663061891694&to_ts=1663148291694&live=true'
          system:
            kubernetes-enabled: true
            kubernetes-template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace:{{ .namespace }} pod_name:{{ .podName }}&cols=&index=&messageDisplay=inline&stream_sort=time,desc&viz=stream&from_ts=1663061891694&to_ts=1663148291694&live=true'
          all-user:
            kubernetes-enabled: true
            kubernetes-template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace:{{ .namespace }} pod_name:{{ .podName }}&cols=&index=&messageDisplay=inline&stream_sort=time,desc&viz=stream&from_ts=1663061891694&to_ts=1663148291694&live=true'
```
However, the flytepropeller pod is not getting created and it throws the error below: time="2022-09-14T11:26:12Z" level=error msg="\nWhile parsing config: yaml: line 6: mapping values are not allowed in this context"
y
is there any more to the error message btw?
and which configmap is this?
d
@Sathish kumar Venkatesan I tested locally with your configuration and this seems to work fine. The 'mapping values are not allowed in this context' error means the parser found what looks like a map where it expected a plain value in the YAML. Could you provide the entire configuration? That way we can figure out exactly what the issue is.
cc @Yuvraj
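For reference, a minimal snippet (illustrative, not taken from the thread) that triggers that exact parser error: an unquoted scalar whose value contains a colon followed by a space, which the parser reads as the start of a nested mapping.

```yaml
# Illustrative only: the unquoted value contains ": ", so the parser
# reports "mapping values are not allowed in this context".
template-uri: https://app.datadoghq.com/logs?query=kube_namespace: pod_name:

# Quoting the whole scalar would avoid the parse ambiguity:
# template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace: pod_name:'
```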
s
Were you able to verify our YAML file for the Datadog links?
d
@Sathish kumar Venkatesan what is the generated configmap for flytepropeller?
s
@Dan Rammer (hamersaw) I see {{ .namespace }} and pod_name:{{ .podName }} are getting empty values:
```yaml
spark.yaml: |
    plugins:
      spark:
        logs:
          all-user:
            cloudwatch-enabled: true
            cloudwatch-template-uri: https://app.datadoghq.com/logs?query=kube_namespace: pod_name:
          mixed:
            cloudwatch-enabled: true
            cloudwatch-template-uri: https://app.datadoghq.com/logs?query=kube_namespace: pod_name:
          system:
            cloudwatch-enabled: true
            cloudwatch-template-uri: https://app.datadoghq.com/logs?query=kube_namespace: pod_name:
        spark-config-default:
        - spark.hadoop.fs.s3a.aws.credentials.provider: com.amazonaws.auth.DefaultAWSCredentialsProviderChain
        - spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version: "2"
        - spark.kubernetes.allocation.batch.size: "50"
```
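For context, and assuming the chart runs these values through Helm's templating: the empty values line up with Helm having already evaluated the placeholders while compiling the chart, so `{{ .namespace }}` and `{{ .podName }}` resolve against the chart's own render context and nothing survives but the literal text around them.

```yaml
# In values.yaml (what was asked for), abbreviated:
kubernetes-template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace:{{ .namespace }} pod_name:{{ .podName }}'

# In the generated configmap (what came out): Helm has consumed the
# {{ ... }} placeholders, leaving only the literal text around them --
# and, unquoted, that ": " is exactly what the YAML parser later
# complains about:
# kubernetes-template-uri: https://app.datadoghq.com/logs?query=kube_namespace: pod_name:
```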
d
@Sathish kumar Venkatesan ok, we have seen this before. Basically the Helm chart uses the same templating for args (i.e. `{{` and `}}`), so when Helm compiles the chart it replaces the Flyte log link templates as well. We should be able to fix this in two steps: (1) use escaped templating for the Flyte log links in the Helm charts; this will correct the `.namespace` / `.podName` replacements you're seeing (@Smriti Satyan, did we get this documented?). (2) Currently you have a space in the URI between `kube_namespace:` and `pod_name:`; in YAML this manifests as a map rather than a value. I'm not sure if this can be quoted or if the space needs to be completely removed.
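A minimal sketch of both steps, assuming the chart passes these values through Helm's `tpl` (which is what makes the placeholders disappear in the first place). The escaping shown is the generic Go-template trick of emitting literal braces; the exact form the Flyte chart and docs recommend may differ, so treat this as illustrative values.yaml content:

```yaml
# Illustrative only. {{ "{{" }} and {{ "}}" }} render to the literal
# strings "{{" and "}}", so the Flyte placeholders {{ .namespace }} and
# {{ .podName }} survive Helm's template pass instead of being resolved
# to empty strings. The URI is single-quoted so the embedded ": " cannot
# be read as the start of a nested mapping.
logs:
  mixed:
    kubernetes-enabled: true
    kubernetes-template-uri: 'https://app.datadoghq.com/logs?query=kube_namespace:{{ "{{" }} .namespace {{ "}}" }} pod_name:{{ "{{" }} .podName {{ "}}" }}&viz=stream&live=true'
```

Whether quoting alone is enough, or the space between the two query terms also has to be removed, is the open question in step (2) above.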
s
@Dan Rammer (hamersaw) this will be documented in a while! Sorry for the delay
d
No problem. Can you link to the conversation, or just put a brief summary of the fix here?
s
Thank you all, I will try.
@Dan Rammer (hamersaw) @Smriti Satyan Thank you so much. I am able to update the link and it is working fine.