# flyte-github
#1586 Bump tensorflow from 2.10.0 to 2.11.1 in /plugins/flytekit-mlflow
Pull request opened by dependabot[bot]. Bumps tensorflow from 2.10.0 to 2.11.1.
Release notes sourced from tensorflow's releases.
TensorFlow 2.11.1
Release 2.11.1
Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.
• Security vulnerability fixes will no longer be patched to this TensorFlow version. The latest TensorFlow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch the security vulnerabilities yourself by following the documented steps. You can refer to the release notes of the latest TensorFlow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.
This release also introduces several vulnerability fixes:
• Fixes an FPE in TFLite in conv kernel CVE-2023-27579
• Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
• Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
• Fixes a segfault in Bincount with XLA CVE-2023-25675
• Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
• Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
• Fixes segmentation fault in tfg-translate CVE-2023-25671
• Fixes an NPE in QuantizedMatMulWithBiasAndDequantize CVE-2023-25670
• Fixes an FPE in AvgPoolGrad with XLA CVE-2023-25669
• Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation CVE-2023-25668
• Fixes a segfault when opening multiframe gif CVE-2023-25667
• Fixes an NPE in SparseSparseMaximum CVE-2023-25665
• Fixes an FPE in AudioSpectrogram CVE-2023-25666
• Fixes a heap-buffer-overflow in AvgPoolGrad CVE-2023-25664
• Fixes an NPE in TensorArrayConcatV2 CVE-2023-25663
• Fixes an integer overflow in EditDistance CVE-2023-25662
• Fixes a segfault in tf.raw_ops.Print CVE-2023-25660
• Fixes an OOB read in DynamicStitch CVE-2023-25659
• Fixes an OOB read in GRUBlockCellGrad CVE-2023-25658
TensorFlow 2.11.0
Release 2.11.0
Breaking Changes
• The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.
If you find your workflow failing due to this change, you may be facing one of the following issues:
Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
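The behavior change above can be sketched as follows (a minimal illustration; the schedule and values here are made up for the example, and the private attribute layout may vary across TF versions):

```python
import tensorflow as tf

# A decaying schedule passed to the new (2.11+) optimizer.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=100, decay_rate=0.9
)
opt = tf.keras.optimizers.Adam(learning_rate=schedule)

# In the new optimizer, `learning_rate` reflects the *current* value
# at the optimizer's step counter, not the schedule object as before.
lr = opt.learning_rate
# The schedule object itself lives on the private attribute.
schedule_obj = opt._learning_rate
```

At step 0 the current value equals the schedule's initial_learning_rate (0.1 here).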
If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
Errors such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.
The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.
• tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and it will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.
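For example, replacing a private import with the public API looks like this (the layer used is just an illustration):

```python
# Before (private path, deleted in TF v2.12):
#   from tensorflow.python.keras import layers

# After: use the public API instead.
from tensorflow import keras   # option 1
import tensorflow as tf        # option 2: tf.keras

dense = keras.layers.Dense(1)
```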
Major Features and Improvements
... (truncated) Changelog Sourced from tensorflow's changelog.
flyteorg/flytekit: All checks have passed (30/30 successful checks)