# flytekit
## Release - v1.14.0

New release published by eapolinario

Flytekit 1.14.0 release notes

### Added

#### Introducing native support for Python dictionaries, dataclasses, and Pydantic BaseModel (#2760)

Flyte uses JSON strings inside a Protobuf struct to serialize certain data structures, but this has posed some challenges. For example, if you tried to pass a Python dict with int elements, those were converted to float, the type supported by Protobuf's struct. This led users to write boilerplate code or implement other suboptimal solutions to circumvent these limitations.

With this release, flytekit adopts the MessagePack format by default to serialize dataclasses, Pydantic's BaseModel, and Python's dict.

Before:
```python
@task
def t1() -> dict:
  ...
  return {"a": 1} # Protobuf Struct {"a": 1.0}

@task
def t2(d: dict):
  print(d["a"]) # this prints 1.0
```

After:
```python
@task
def t1() -> dict: # Literal(scalar=Scalar(binary=Binary(value=b'msgpack_bytes', tag="msgpack")))
  ...
  return {"a": 1}  # Protobuf Binary value=b'\x81\xa1a\x01', produced by msgpack

@task
def t2(d: dict):
  print(d["a"]) # this prints 1
```

**Warning:** This change is backwards-compatible only. Tasks registered with flytekit>=1.14 can consume results produced by older versions of flytekit, but not vice versa, unless you set the FLYTE_USE_OLD_DC_FORMAT environment variable to true. If you try to reference flytekit>=1.14 tasks from flytekit<1.14 downstream tasks, you will get a TypeTransformerFailedError.
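If you need to keep emitting the old format during a staged upgrade, one way to set that variable is through the task decorator's `environment` parameter; a minimal sketch, assuming the variable is read in the producing task's container (the task below is hypothetical):

```python
from flytekit import task

# Hypothetical producer pinned to the legacy JSON-in-Protobuf-struct format
# so that downstream tasks still running flytekit<1.14 can consume its output.
@task(environment={"FLYTE_USE_OLD_DC_FORMAT": "true"})
def legacy_producer() -> dict:
    return {"a": 1}
```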
To experience the benefits of this change, upgrading flytekit to 1.14 is not enough; you also have to upgrade the Flyte backend to 1.14.
By introducing a new serialization format (which you will see in the Flyte console as msgpack), Flyte enables you to leverage robust data structures without writing glue code or sacrificing accuracy or reliability.
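As a rough sketch of what this enables, a Pydantic BaseModel and a plain dataclass can now flow between tasks with int fields preserved; the type names and values below are made up for illustration:

```python
from dataclasses import dataclass

from flytekit import task, workflow
from pydantic import BaseModel

@dataclass
class Stats:
    count: int

class TrainConfig(BaseModel):
    epochs: int
    lr: float

@task
def make_config() -> TrainConfig:
    # Serialized to msgpack bytes, so epochs stays an int.
    return TrainConfig(epochs=10, lr=0.01)

@task
def summarize(cfg: TrainConfig) -> Stats:
    # A plain dataclass round-trips the same way.
    return Stats(count=cfg.epochs)  # count is 10, not 10.0

@workflow
def wf() -> Stats:
    return summarize(cfg=make_config())
```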
#### Integration with Jupyter Notebooks (#5907)
Now you can consume Flyte from a Jupyter Notebook without requiring any plugin. Using FlyteRemote, Flyte automatically detects requests coming from a Notebook environment and executes them accordingly, giving Notebook users access to execution outputs, versioning, reproducibility, and all the infrastructure abstractions that Flyte provides. Learn how it works in this blog by main contributor @Mecoli1219.
Currently, @dynamic workflows are the only feature not supported in Notebooks. Supporting them is a planned enhancement as part of the improved eager mode, coming out early next year.
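For illustration, a minimal sketch of driving an execution from a notebook cell with FlyteRemote; the endpoint, project, and domain are placeholders for your own deployment:

```python
from flytekit import task
from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

@task
def double(n: int) -> int:
    return n * 2

# Placeholders: point these at your own Flyte deployment.
remote = FlyteRemote(
    Config.for_endpoint("flyte.example.com"),
    default_project="flytesnacks",
    default_domain="development",
)

# From a notebook, flytekit detects the interactive environment and
# registers the task automatically before running it remotely.
execution = remote.execute(double, inputs={"n": 21}, wait=True)
print(execution.outputs)  # inspect outputs directly in the notebook
```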
#### Flyte now leverages asyncio to speed up executions (#2829)

Both the type engine and the data persistence layer have been updated to support asynchronous, non-blocking I/O operations. These changes aim to improve the performance and scalability of I/O-bound operations. For example, tasks that return large lists of FlyteFiles used to be serialized in batches but now benefit from better performance without any code changes.

### Changed

#### Offloading of literals (#2872)

Flyte automates data movement between tasks using gRPC as the communication protocol. When users need to move large amounts of data or use MapTasks that produce a large literal collection output, they typically hit the payload size limit gRPC can handle, getting an error like the following:

`[LIMIT_EXCEEDED] limit exceeded. 2.903926mb > 2mb`

This has forced users to split up MapTasks, refactor their workflows to offload outputs to a FlyteFile or FlyteDirectory rather than returning literal values directly, or bump up the storage.limits.maxDownloadMBs parameter to arbitrary sizes, leading to inconvenient or hard-to-maintain solutions. For example, before upgrading flytekit, a simple workflow like the following:
```python
import flytekit as fl

@fl.task
def print_arrays(arr1: str) -> None:
    print(f"Array 1: {arr1}")

@fl.task
def increase_size_of_arrays(n: int) -> str:
    arr1 = 'a' * n * 1024
    return arr1

# Workflow: orchestrate the tasks
@fl.workflow
def simple_pipeline(n: int) -> int:
    arr1 = increase_size_of_arrays(n=n)
    print_arrays(arr1=arr1)  # tasks must be called with keyword arguments inside a workflow
    return 2

if __name__ == "__main__":
    print(f"Running simple_pipeline() {simple_pipeline(n=11000)}")
```

Fails with the following message:
```
output is too large [11264029] bytes, max allowed [2097152] bytes
```

flytekit >=1.14 automatically offloads to blob storage any object larger than 10Mb (the gRPC limit), allowing you to manage larger data and achieve higher degrees of parallelism effortlessly while continuing to use literal values. After upgrading to 1.14, the above example runs and the outputs are stored in the metadata bucket:

`s3://my-s3-bucket/metadata/propeller/flytesnacks-development-af5xxxkcqzzmnjhv2n4r/n0/data/0/outputs.pb`

This feature is enabled by default. If you need to turn it off, set propeller.literalOffloadingConfigEnabled to false in your Helm values.
The role you use to authenticate to your infrastructure provider will need to have read access to the metadata bucket so flytekit can retrieve the offloaded literal.
This feature won't work if you use Flyte from a Jupyter Notebook, use fast registration (pyflyte run), or launch executions from the console. Supporting these scenarios is a planned future enhancement.
### Breaking

#### BatchSize is removed (#2857)

This change affects MapTasks that relied on the PickleTransformer and the BatchSize class to optimize the serial uploading of big lists. It was removed because the feature was not widely used, and the asynchronous handling of pickles introduced in this release reduces the need for batching.

#### ArrayNode is not experimental anymore (#2900)

Considering ArrayNode has been the default MapTask since flytekit 1.12, the feature is no longer under flytekit.experimental.arraynode; it should now be imported directly, as flytekit.arraynode.

### Full changelog

- Fix array node map task for offloaded literal by @pmahindrakar-oss in #2772
- Support default label/annotation for the default launch plan creating from workflow definition by @Mecoli1219 in #2776
- [FlyteClient][FlyteDeck] Get Downloaded Artifact Signed URL via Data Proxy by @Future-Outlier in #2777
- Expose Options in Flytekit for Direct User Access by @Mecoli1219 in #2785
- Adds a simple async utility that manages an async loop in another thread by @thomasjpfan in #2784
- Adds a random DOCSEARCH_API_KEY to get monodocs build to succeed by @thomasjpfan in #2787
- Binary IDL With MessagePack by @Future-Outlier in #2760
- Pickle remote task for Jupyter Notebook Environment by @Mecoli1219 in #2733
- Fix getting started link, remove extra parenthesis by @deepyaman in #2788
- update bigquery plugin reqs by @dansola in https://github.com/flyteorg/flytekit/pull/279…