Is there a way to access a property of the workflow arguments in the workflow code?
# ask-the-community
s
Is there a way to access a property of the workflow arguments in the workflow code? When I do something like this, it throws
TypeError: 'Promise' object is not subscriptable
since Flyte creates promises during packaging/deployment:
from flytekit import workflow

@workflow
def wf(wf_args: dict) -> int:
    print(wf_args["x"])  # fails: wf_args is a Promise while the workflow is being compiled
    ...
m
Since workflows are only used to build the execution DAG, you can't actually access the properties there: the inputs are only deserialized once they're passed into a task. So within a task you can call
wf_args["x"]
without issues, since that's where the actual Python code runs and the inputs have been deserialized by then.
So this would allow you to access them:
from flytekit import task, workflow

@task
def print_x(wf_args: dict):
    print(wf_args["x"])  # inputs are plain Python values inside the task

@workflow
def wf(wf_args: dict):
    print_x(wf_args=wf_args)
s
Right - is there any way to get the Flyte execution ID from the workflow code when it’s executed, then?
@Maarten de Jong
I’m trying to access the execution id inside the workflow (not inside a task) - thanks by the way for your quick response
m
flytekit.current_context().execution_id
should do the trick
s
Woooo
m
Inside the workflow would still not work unfortunately
s
Nice
Oh
It would only work inside a task?
m
As far as I know, yeah, but you could give it a shot - I'm starting to doubt myself writing this
Since the execution ID isn't assigned to the DAG, I don't expect it to work in the workflow
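For reference, here's a minimal sketch of reading the execution ID from inside a task (report_execution and wf are just placeholder names for this example):

import flytekit
from flytekit import task, workflow

@task
def report_execution() -> str:
    # current_context() is populated at task runtime, so this works here;
    # the same call inside @workflow code wouldn't see a real execution
    exec_id = flytekit.current_context().execution_id
    return exec_id.name  # execution_id is an identifier object; .name is the execution name string

@workflow
def wf() -> str:
    return report_execution()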
s
I’m trying to use a with context to wrap tasks, but within that I’d like to pass in the execution ID so it matches my experiment run ID
Do you think I’m approaching it the wrong way?
@workflow
def wf(wf_args):
    with logger.run(run_id=1) as run:
        task1()
Since each execution is tied to a run, I think it makes sense logically, but I can’t find a way to tie the Flyte execution ID to my run ID
m
Are there multiple tasks called within your logger context manager?
s
Yes
@workflow
def wf(wf_args):
    with logger.run(run_id=1) as run:
        task1_o = task1()
        task2_o = task2(x=task1_o)
        ...
Since the entire workflow execution is a run
Or should I put the with outside of the @workflow?
Not sure if that makes sense - my python knowledge is limited
Still not sure how to grab the flyte execution id though
m
My personal recommendation in that case would be to run everything from within a dynamic; I think you can then use the logger and the execution ID as you want
import flytekit
from flytekit import dynamic, workflow

@dynamic
def my_experiment(wf_args: dict):
    # execution_id is an identifier object; its .name field is typically the string you want
    with logger.run(run_id=flytekit.current_context().execution_id.name) as run:
        task1_o = task1(wf_args=wf_args)
        task2_o = task2(x=task1_o)
        ...

@workflow
def wf(wf_args: dict):
    my_experiment(wf_args=wf_args)
s
Hmm ok - I need to check out what @dynamic does
Why does this work?
m
Dynamic is basically a workflow, but the DAG is evaluated at runtime. The downside is that you need to schedule a machine to evaluate it; the upside is that you can run a mixture of Python code and Flyte workflows/tasks
This works because the context manager is Python code that can't run inside the DAG that the workflow creates, but since the dynamic runs on an actual machine (and isn't just a graph), it can host the context manager
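To make that concrete, here's a small hypothetical sketch (double and fan_out are made-up names): plain Python control flow like this loop is fine inside a @dynamic because the body runs on a machine at execution time, whereas a @workflow body is only traced to build the graph.

import typing
from flytekit import task, dynamic, workflow

@task
def double(x: int) -> int:
    return x * 2

@dynamic
def fan_out(n: int) -> typing.List[int]:
    # Real Python executes here at run time, so loops, ifs and context
    # managers all work; each task call still becomes a node in the DAG
    # that gets built while this body runs.
    results = []
    for i in range(n):
        results.append(double(x=i))
    return results

@workflow
def wf(n: int) -> typing.List[int]:
    return fan_out(n=n)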
s
Hmm ok let me read up a bit more on the docs and get back to you - not yet sure what the implication is of using this instead of a regular workflow. When you say the downside is we need a machine, I’m a bit confused because doesn’t the workflow also need a machine to evaluate anyway (since it’s just a graph)?
m
So that's where my knowledge ends a little bit - perhaps someone else can fill in and correct my mistakes - but from what I've heard here and there, I believe the workflow graph is executed by a Kubernetes operator, so Kubernetes deals with executing it and you don't actually need to schedule a machine to evaluate the graph
s
OK thanks for the help! I’ll look into this a bit more