# ask-the-community
k
Trying to debug some odd behavior. I have two separate environments with the same propeller configmap deployed, but the Console UI is showing different logging links. One is showing what I configured in the configmap, but the other is showing the default settings. I pulled the configmaps using kubectl, but they're the same. Is this the correct configuration for updating log links in propeller? Do I need to restart anything other than propeller to update the log links?
task_logs.yaml: | 
    plugins:
      logs:
        cloudwatch-enabled: false
        kubernetes-enabled: false
        templates:
          - displayName: "Data Dog"
            templateUris:
              - "<https://app.datadoghq.com/logs?query=pod_name%3A{{> .podName }}%20kube_namespace%3A{{ .namespace }}%20&cols=&index=&messageDisplay=inline&stream_sort=desc&viz=stream&from_ts={{ .podUnixStartTime }}000&to_ts={{ .podUnixFinishTime }}000&live=false"
g
Are you looking back at the log links for tasks in old executions, or starting a new workflow execution to see the results of this config? I think the log link template is evaluated at some point during a task's execution and the resulting string is stored, so you won't see the results of the current config until a new task executes.
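To illustrate (the pod name, namespace, and timestamps here are made up): for a task pod my-task-n0-0 in namespace flytesnacks-development, the template above would render once to something like

    https://app.datadoghq.com/logs?query=pod_name%3Amy-task-n0-0%20kube_namespace%3Aflytesnacks-development%20&cols=&index=&messageDisplay=inline&stream_sort=desc&viz=stream&from_ts=1700000000000&to_ts=1700000300000&live=false

and that stored string is what the Console keeps showing for that execution, even after the configmap changes.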
k
I'm starting new ones to test
g
In that case, I don't know. I thought restarting propeller would be sufficient.
k
Does anything else impact these log links, like the log level? Although I haven't changed that from the default in either environment
d
@Katrina P this is very odd. You are correct: updating the configmap and then restarting FlytePropeller should be enough. And you shouldn't actually need to restart propeller anyway; it should automatically detect the updates and apply them.
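In case it helps, this is how I'd bounce it anyway (the deployment name and namespace assume a default Flyte install; yours may differ):

    # Assumed names for a default install
    kubectl -n flyte rollout restart deployment/flytepropeller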
What log level are you referring to?
k
Ah, disregard my mention of the log level; the log level config I was looking at is for admin/datacatalog, so it shouldn't impact this
I redeployed my configmap again to add a new log link, and now my spark tasks aren't displaying the right log links 😞 I have no idea why (they were before the change) 🤔
Spark should be using the same default log configuration as defined in task_logs.yaml, should it not?
But unfortunately I get different log links across those two tasks
Python task: [screenshot of its log links]
SparkTask: [screenshot of its log links]
d
So the spark plugin seems to have its own separate log configuration.
k
I see. I was reading the documentation ("Spark Plugin uses the same default log configuration as explained in Configuring Logging Links in UI.") to mean that it used the task_logs.yaml config
d
🤦 We need to fix that. I would have thought the same thing.
k
Yeah, I read that section as saying that separating the user logs and system logs is optional, and that if not configured it would use the default
d
Oh sure, looking through the code, it looks like you get the logs that are configured. By default these are mixed k8s logs, so you should be able to modify that configuration and get the logs how you want them. Does that seem reasonable?
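Something like this under the spark plugin's own config block should do it. The key names (mixed/user/system) are my reading of the plugin's LogConfig, so treat this as a sketch and double-check against your propeller version:

    plugins:
      spark:
        logs:
          mixed:
            kubernetes-enabled: false
            templates:
              - displayName: "Data Dog"
                templateUris:
                  - "https://app.datadoghq.com/logs?query=pod_name%3A{{ .podName }}%20kube_namespace%3A{{ .namespace }}%20&cols=&index=&messageDisplay=inline&stream_sort=desc&viz=stream&from_ts={{ .podUnixStartTime }}000&to_ts={{ .podUnixFinishTime }}000&live=false"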
k
Yeah, okay, good. I think the documentation is fine then