acoustic-carpenter-78188
04/07/2023, 2:11 AM
TensorFlow 2.11.1
Release 2.11.1
Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.
• Security vulnerability fixes will no longer be patched to this TensorFlow version. The latest TensorFlow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch the security vulnerabilities yourself. You can refer to the release notes of the latest TensorFlow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.
This release also introduces several vulnerability fixes:
• Fixes an FPE in TFLite in conv kernel CVE-2023-27579
• Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
• Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
• Fixes a segfault in Bincount with XLA CVE-2023-25675
• Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
• Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
• Fixes segmentation fault in tfg-translate CVE-2023-25671
• Fixes an NPE in QuantizedMatMulWithBiasAndDequantize CVE-2023-25670
• Fixes an FPE in AvgPoolGrad with XLA CVE-2023-25669
• Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation CVE-2023-25668
• Fixes a segfault when opening multiframe gif CVE-2023-25667
• Fixes an NPE in SparseSparseMaximum CVE-2023-25665
• Fixes an FPE in AudioSpectrogram CVE-2023-25666
• Fixes a heap-buffer-overflow in AvgPoolGrad CVE-2023-25664
• Fixes an NPE in TensorArrayConcatV2 CVE-2023-25663
• Fixes an integer overflow in EditDistance CVE-2023-25662
• Fixes a segfault in tf.raw_ops.Print CVE-2023-25660
• Fixes an OOB read in DynamicStitch CVE-2023-25659
• Fixes an OOB read in GRUBlockCellGrad CVE-2023-25658
TensorFlow 2.11.0
Release 2.11.0
Breaking Changes
• The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.
If you find your workflow failing due to this change, you may be facing one of the following issues:
• Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizer.legacy.XXX (e.g. tf.keras.optimizer.legacy.Adam).
• TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizer.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
• Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
• Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
• If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizer.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
• Errors, such as Cannot recognize variable... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
• Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.
The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.
• tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.
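The import migration above is mechanical; a minimal sketch, assuming TensorFlow 2.x is installed (the Dense layer is just an arbitrary example):

```python
# Before (private legacy copy, deleted in TF 2.12):
#   from tensorflow.python.keras import layers
# After, use the public API instead:
from tensorflow import keras

layer = keras.layers.Dense(8)

# Equivalent spelling via the top-level module:
import tensorflow as tf

# Both names resolve to the same public Keras API.
assert tf.keras.layers.Dense is keras.layers.Dense
```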
Major Features and Improvements... (truncated)
acoustic-carpenter-78188
04/19/2023, 9:42 PM