# announcements

Hafsa Junaid

03/26/2022, 2:55 PM
@Yuvraj The propeller workload pod is still crashing. The annotation with a namespace is done, but the actual problem is still there: I need to access an external IP for the UI services. @Yee

Yuvraj

03/26/2022, 2:56 PM
Nice. Can you paste your propeller logs?

Hafsa Junaid

03/26/2022, 3:16 PM
message has been deleted

Yuvraj

03/26/2022, 3:18 PM
Cool, I've seen this before. In your helm values file, remove spark from here https://github.com/flyteorg/flyte/blob/master/charts/flyte-core/values-gcp.yaml#L279-L288 and then redeploy the application.
cc: @Dan Rammer (hamersaw) Any idea? What is the issue with the spark plugin?

Hafsa Junaid

03/26/2022, 3:32 PM
Re-deployed after commenting it out!
message has been deleted

Yuvraj

03/26/2022, 3:34 PM
What do the logs say?

Hafsa Junaid

03/26/2022, 3:35 PM
```
1. enabled-plugins:
2.   - container
3.   - sidecar
4.   # - spark
5.   - k8s-array
6. default-for-task-types:
7.   container: container
8.   sidecar: sidecar
9.   spark: spark
10.  container_array: k8s-array
```
Line #4 is the required change, right?

Yuvraj

03/26/2022, 3:36 PM
You also need to comment out line 9
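Taken together, the two edits amount to commenting out the plugin entry (line 4) and its task-type default (line 9). A minimal shell sketch on a scratch copy of that section (the file path and exact layout here are illustrative, not the real chart values):

```shell
# Scratch copy of the relevant section of the helm values (illustrative layout).
cat > /tmp/enabled-plugins.yaml <<'EOF'
enabled-plugins:
  - container
  - sidecar
  - spark
  - k8s-array
default-for-task-types:
  container: container
  sidecar: sidecar
  spark: spark
  container_array: k8s-array
EOF

# Comment out both spark references: the plugin entry and its
# default-for-task-types mapping.
sed -i -e 's/^  - spark$/  # - spark/' \
       -e 's/^  spark: spark$/  # spark: spark/' /tmp/enabled-plugins.yaml

cat /tmp/enabled-plugins.yaml
```

After making the same edits in the real values file, a `helm upgrade` (or your usual deploy pipeline) re-rolls propeller with the spark plugin disabled.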

Hafsa Junaid

03/26/2022, 3:42 PM
It's done, thanks @Yuvraj! But the external IP is still not accessible.

Yuvraj

03/26/2022, 4:22 PM
Can you post the ingress spec?
Did you set up the DNS?
Also paste the output of kubectl get ManagedCertificate. Did you install cert-manager?

Hafsa Junaid

03/26/2022, 4:26 PM
I have the domain name, but mapping it requires an external IP, and we don't have one for flyteconsole (as per my understanding).
message has been deleted
I created the SslCert client certificate for the database instance.

Yuvraj

03/26/2022, 5:02 PM
Do you think we could discuss this on a Google Meet?
Looks like your certificates are wrong:
```
➜  ~ curl -L --header "Host: flyte.openaimp.com" https://34.70.204.255 -v
*   Trying 34.70.204.255:443...
* Connected to 34.70.204.255 (34.70.204.255) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
I also didn’t find any A record for your dns

Hafsa Junaid

03/26/2022, 5:40 PM
Sure, we can meet. I really need to understand this.

Yuvraj

03/26/2022, 5:43 PM
@Haytham Abuelfutuh Do we need any change in the spark config for GCP?
```yaml
sparkoperator:
  enabled: true
  plugin_config:
    plugins:
      spark:
        # -- Spark default configuration
        spark-config-default:
          # We override the default credentials chain provider for Hadoop so that
          # it can use the serviceAccount based IAM role or ec2 metadata based.
          # This is more in line with how AWS works
          - spark.hadoop.fs.s3a.aws.credentials.provider: "com.amazonaws.auth.DefaultAWSCredentialsProviderChain"
          - spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version: "2"
          - spark.kubernetes.allocation.batch.size: "50"
          - spark.hadoop.fs.s3a.acl.default: "BucketOwnerFullControl"
          - spark.hadoop.fs.s3n.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
          - spark.hadoop.fs.AbstractFileSystem.s3n.impl: "org.apache.hadoop.fs.s3a.S3A"
          - spark.hadoop.fs.s3.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
          - spark.hadoop.fs.AbstractFileSystem.s3.impl: "org.apache.hadoop.fs.s3a.S3A"
          - spark.hadoop.fs.s3a.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
          - spark.hadoop.fs.AbstractFileSystem.s3a.impl: "org.apache.hadoop.fs.s3a.S3A"
          - spark.hadoop.fs.s3a.multipart.threshold: "536870912"
          - spark.blacklist.enabled: "true"
          - spark.blacklist.timeout: "5m"
          - spark.task.maxfailures: "8"
```
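Worth noting: the defaults above are S3/AWS-oriented (the Hadoop s3a filesystem and the AWS credentials chain). On GCP one would expect GCS-connector settings instead. A hedged sketch of what that might look like (the filesystem classes are the standard Hadoop GCS connector ones, but treat this fragment as an assumption, not the chart's official GCP config):

```yaml
sparkoperator:
  enabled: true
  plugin_config:
    plugins:
      spark:
        spark-config-default:
          # GCS connector instead of s3a (sketch; not the official chart values)
          - spark.hadoop.fs.gs.impl: "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem"
          - spark.hadoop.fs.AbstractFileSystem.gs.impl: "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS"
          # Authenticate with the node / workload-identity service account
          - spark.hadoop.google.cloud.auth.service.account.enable: "true"
          - spark.kubernetes.allocation.batch.size: "50"
          - spark.blacklist.enabled: "true"
          - spark.blacklist.timeout: "5m"
```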

Sören Brunk

03/28/2022, 5:39 AM
Google managed certificates (kubectl get managedcertificates.networking.gke.io) only work with the GKE ingress. If you're using another ingress controller (like nginx or contour) with cert-manager, you will have a different type of certificate: kubectl get certificates.cert-manager.io
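For completeness, here is what the GKE-ingress path looks like: a ManagedCertificate resource plus the ingress annotation that binds it. All names, the domain, and the backend service below are hypothetical placeholders, not values from this thread:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: flyte-cert              # hypothetical name
spec:
  domains:
    - flyte.example.com         # must have an A record pointing at the ingress IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flyte-ingress           # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: "flyte-cert"
spec:
  defaultBackend:
    service:
      name: flyteconsole        # hypothetical service name
      port:
        number: 80
```

The certificate only turns Active once the DNS A record resolves to the ingress IP, which is why the missing A record above also blocks provisioning.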