GitHub
09/03/2023, 7:11 AM
delete      Terminates/deletes various Flyte resources such as tasks, workflows, launch plans, executions, and projects.
Usage:
flytectl delete [command]
Available Commands:
cluster-resource-attribute Deletes matchable resources of cluster attributes.
execution Terminates/deletes execution resources.
execution-cluster-label Deletes matchable resources of execution cluster label.
execution-queue-attribute Deletes matchable resources of execution queue attributes.
plugin-override Deletes matchable resources of plugin overrides.
task-resource-attribute Deletes matchable resources of task attributes.
workflow-execution-config Deletes matchable resources of workflow execution config.
But some of these resources cannot actually be deleted; projects, for example, can only be archived/activated.
It was mentioned in #1619 as a doc bug but has not been fixed yet.
Expected behavior
A more accurate delete command help message.
Additional context to reproduce
1. flytectl
2. flytectl delete --help
Screenshots
No response
Are you sure this issue hasn't been raised already?
☑︎ Yes
Have you read the Code of Conduct?
☑︎ Yes
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
{
"json": {
"exec_id": "grs2u5m2ns",
"node": "n0",
"ns": "flytesnacks-development",
"res_ver": "137261245",
"routine": "worker-2",
"src": "handler.go:313",
"tasktype": "python-task",
"wf": "flytesnacks:development:<http://core.control_flow.dynamics.wf|core.control_flow.dynamics.wf>"
},
"level": "warning",
"msg": "No plugin found for Handler-type [python-task], defaulting to [container]",
"ts": "2021-09-10T15:53:28Z"
}
There should be a document that explains how to filter these logs down to the entries for one specific workflow or execution ID; a rough sketch of such a filter follows below.
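For illustration, a minimal sketch of such a filter, assuming propeller emits one JSON object per line as in the sample above; the input file name and target exec_id are placeholders:

import json
import sys

# Minimal sketch: keep only the log lines that belong to one execution.
# Assumes one JSON object per line, as in the sample above; the file name
# and exec_id below are placeholders.
TARGET_EXEC_ID = "grs2u5m2ns"

with open("propeller.log") as f:
    for line in f:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines
        if record.get("json", {}).get("exec_id") == TARGET_EXEC_ID:
            sys.stdout.write(line)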
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
@flytekit.task(
    retries=1,
    requests=flytekit.Resources(cpu="12", mem="32Gi"),
    limits=flytekit.Resources(mem="64Gi", cpu="12"),
    temp_disk_volumes=[flytekit.Volume(size="100G", mount_path="/tmpdisk_01")],
)
def foo():
    download_large_file_to("/tmpdisk_01/abc")
Describe alternatives you've considered
Maybe Flyte users can do this today with a Sidecar (pod) task, but having support for this parameter on every task would be more convenient; a rough sketch of that workaround follows below.
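For reference, a rough sketch of the pod-task workaround, assuming the flytekitplugins-pod plugin and the kubernetes Python client are installed; the volume name, emptyDir size limit, and container name are illustrative, and download_large_file_to is the hypothetical helper from the proposal above:

from flytekit import task
from flytekitplugins.pod import Pod
from kubernetes.client import (
    V1Container,
    V1EmptyDirVolumeSource,
    V1PodSpec,
    V1Volume,
    V1VolumeMount,
)

# Sketch only: mount an emptyDir volume as scratch space for the task container.
pod_spec = V1PodSpec(
    containers=[
        V1Container(
            name="primary",
            volume_mounts=[V1VolumeMount(name="tmpdisk", mount_path="/tmpdisk_01")],
        )
    ],
    volumes=[
        V1Volume(name="tmpdisk", empty_dir=V1EmptyDirVolumeSource(size_limit="100Gi"))
    ],
)

@task(task_config=Pod(pod_spec=pod_spec, primary_container_name="primary"))
def foo() -> None:
    download_large_file_to("/tmpdisk_01/abc")  # hypothetical helper from the proposal above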
[Optional] Propose: Link/Inline OR Additional context
If you have ideas about the implementation please propose the change. If inline keep it short, if larger then you link to an external document.
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
flytekit-resource-monitoring
that can load components that can output certain meta outputs.
For this, we will need support for meta-outputs.
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
flytectl sandbox status
Provide a possible output or ux example
$ flytectl sandbox status
...
To Connect to FlyteSandbox Kube cluster export:
...
To connect to FlyteSandbox using flytectl export:
...
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
We are running flytectl inside a Gitlab CI pipeline. Gitlab supports supplying CI variables mounted as files, where the environment variable points to a file containing the value of the CI variable, so naturally we tried setting FLYTECTL_CONFIG through this mechanism.
However, this fails with the following error message when trying to execute any flytectl command:
Error:
Unsupported Config Type ""
ERRO[0000]
Unsupported Config Type ""
Apparently, this error message originates in the spf13/viper library used by flytectl through flytestdlib. The error is raised because viper tries to detect the type of the config file from its extension, which is unset in our CI environment (Gitlab names the file identically to the variable name, e.g. the FLYTECTL_CONFIG variable will be stored under /builds/resources/repo_name.tmp/FLYTECTL_CONFIG).
The error is not specific to Gitlab CI, but can be reproduced by pointing FLYTECTL_CONFIG at any file without an extension; see below.
Steps to reproduce
# Note that ./config is a valid YAML flytectl config file
$ export FLYTECTL_CONFIG=$PWD/config
$ cat $FLYTECTL_CONFIG
admin:
  # For GRPC endpoints you might want to use dns:///flyte.myexample.com
  endpoint: dns:///localhost:30081
  insecure: true
logger:
  show-source: true
  level: 0
storage:
  connection:
    access-key: minio
    auth-type: accesskey
    disable-ssl: true
    endpoint: http://10.32.16.105:30084
    region: us-east-1
    secret-key: miniostorage
  type: minio
  container: "my-s3-bucket"
  enable-multicontainer: true
$ flytectl get projects
Error:
Unsupported Config Type ""
ERRO[0000]
Unsupported Config Type ""
$ flytectl version
A new release of flytectl is available: 0.3.4 → v0.3.4
{
"App": "flytectl",
"Build": "8ba75a6",
"Version": "0.3.4",
"BuildTime": "2021-10-01 15:37:24.613781861 +0200 CEST m=+0.029412676"
}
Proposed fix
Viper offers the SetConfigName and SetConfigType methods, which could be used in flytestdlib instead of SetConfigFile if the config file is always assumed to be a YAML file.
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
flytectl version reports that a new release is available, even directly after flytectl upgrade:
$ flytectl upgrade
You have already latest version of flytectl
$ flytectl version
A new release of flytectl is available: 0.3.4 → v0.3.4
{
"App": "flytectl",
"Build": "8ba75a6",
"Version": "0.3.4",
"BuildTime": "2021-10-01 16:08:44.983660372 +0200 CEST m=+0.019362263"
}
Minor nitpick: the flytectl upgrade error message contains a typo (missing "the").
It seems that the version number specified in the build (used as stdlibversion.Version in cmd/version/version.go) incorrectly omits the v of the Git tag for the release. This leads to IsVersionGreaterThan in pkg/util/util.go incorrectly reporting that the latest Github version is greater than the local build version.
I haven't fully grasped the release process, but it seems that the Github release action sets the version number without the v prefix in the Goreleaser action, which would explain the discrepancy.
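To illustrate the suspected mismatch (plain Python for illustration; the actual check is Go code in pkg/util/util.go): the only difference between the two version strings is the v prefix, so normalizing both sides before comparing would make them equal.

# Illustration only, not the actual Go implementation in pkg/util/util.go:
# the local build version lacks the "v" prefix carried by the Git tag.
def normalize(version: str) -> str:
    return version.removeprefix("v")  # Python 3.9+

local_build = "0.3.4"      # stdlibversion.Version as baked into the build
latest_release = "v0.3.4"  # tag name reported for the latest GitHub release

print(local_build == latest_release)                        # False: looks like a newer release
print(normalize(local_build) == normalize(latest_release))  # True once both are normalized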
Expected behavior
No newer release should be reported after upgrading to the latest version using flytectl upgrade.
[Optional] Additional context
To Reproduce
1. Upgrade to latest release using flytectl upgrade
2. Perform update check through flytectl version
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
Error: please check your Storage Config. It failed while uploading the source code. initContainer is required even with `enable-multicontainer`
This in no way indicates that the source code artifacts are missing from the serialization stage.
Expected behavior
Clear error message when there are no serialized pb files to register
[Optional] Additional context
To Reproduce
Steps to reproduce the behavior:
1.
2.
Screenshots
If applicable, add screenshots to help explain your problem.
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
task = BlazingSQL(
    input_schema=FlyteSchema,
    outputs=...,
    query="""SELECT count(*) FROM taxi GROUP BY year(key)""",
    use_gpus=False,  # True by default
)
Type of Plugin
☑︎ Python/Java interface only plugin
☐ Web Service (e.g. AWS Sagemaker, GCP DataFlow, Qubole etc...)
☐ Kubernetes Operator (e.g. TfOperator, SparkOperator, FlinkK8sOperator, etc...)
☐ Customized Plugin using native kubernetes constructs
☐ Other
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
@task(task_config=Pod(pod_spec=pod_spec, primary_container_name="container_name"))
This means that users need to know the configuration of the cluster (e.g. whether any sidecars will be added automatically) in order to successfully create/run a Flyte workflow.
Goal: What should the final outcome look like, ideally?
Ideally a user of Flyte wouldn't need to declare their task as a pod task in order to run it in a Kubernetes cluster that uses mutating webhooks to add sidecars to pods. The user could declare their task with a plain @task decorator, and the Flyte task pod would run until the Flyte task container finishes, after which the rest of the workflow would proceed.
Describe alternatives you've considered
The only other alternative I've considered is continuing to use the workaround of declaring all workflows/tasks as pod tasks.
flyteorg/flyte
GitHub
09/04/2023, 12:38 AM
default-annotations:
  - annotationKey1: annotationValue1_{PROJECT}_{DOMAIN}  # -> annotationValue1_flytesnacks_development
  - key_{project}: True  # -> key_flytesnacks: True
The idea would be to use predefined variable names (PROJECT, DOMAIN, WORKFLOW(?)) that are readily available anyway when creating task pods; a minimal sketch of the substitution follows below.
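A minimal sketch of the proposed substitution (illustrative Python; the actual pod-creation code lives in flytepropeller and is Go), covering both the {PROJECT}-style and {project}-style placeholders from the example above; the render function and its signature are hypothetical:

# Illustrative sketch of the proposed placeholder substitution; render and
# its signature are hypothetical, not existing Flyte code.
def render(template: str, project: str, domain: str, workflow: str) -> str:
    values = {"PROJECT": project, "DOMAIN": domain, "WORKFLOW": workflow}
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
        template = template.replace("{" + name.lower() + "}", value)
    return template

print(render("annotationValue1_{PROJECT}_{DOMAIN}", "flytesnacks", "development", "my_wf"))
# -> annotationValue1_flytesnacks_development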
Describe alternatives you've considered
The alternative would be to allow defining project/domain-specific keys/values, which would allow more fine-grained control (maybe not all projects need the key/value defined?) but at the same time would be less flexible with regard to including new projects. It basically serves a slightly different use case.
Propose: Link/Inline OR Additional context
No response
Are you sure this issue hasn't been raised already?
☑︎ Yes
Have you read the Code of Conduct?
☑︎ Yes
flyteorg/flyte