Bug fixes with variable handling in LossScaleOptimizer. #21706

Merged

hertschuh merged 1 commit into keras-team:master from hertschuh:loss_scale_optimizer on Oct 2, 2025

Conversation

@hertschuh (Collaborator) commented Oct 1, 2025

`overwrite_with_gradient`

`overwrite_with_gradient` would be ineffective on JAX in real-world conditions, i.e. within `model.fit`.

This is because in the training loop, `stateless_apply` is passed `trainable_variables` as arrays containing the values of the trainable variables, not the variables themselves. The attribute therefore has to be read from the `keras.Variable` objects the optimizer itself holds.
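
To make this concrete, here is a minimal sketch of the JAX-style calling convention, with illustrative names (`model`, `optimizer`, `grads`) rather than the exact Keras internals:

# In model.fit on JAX, stateless_apply only ever sees values:
optimizer_values = [v.value for v in optimizer.variables]
trainable_values = [v.value for v in model.trainable_variables]

# A check such as var.overwrite_with_gradient can therefore not be made on
# the trainable_variables argument (these are plain arrays); it has to be
# made on the keras.Variable objects the optimizer was built with, i.e.
# self._trainable_variables.
new_trainable_values, new_optimizer_values = optimizer.stateless_apply(
    optimizer_values, grads, trainable_values
)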

`apply` with gradients only

`apply(grads)` without the `trainable_variables` argument passed in would not apply anything.

This is because the code uses `self._trainable_variables`, which was empty for `LossScaleOptimizer` since its `build` never called the base implementation. This was fixed by adding `super().build(...)`.
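
A hedged sketch of the shape of the fix (simplified; the real class tracks more state than shown here):

import keras

class LossScaleOptimizer(keras.optimizers.Optimizer):  # simplified sketch
    def build(self, variables):
        # Build the wrapped optimizer's state for these variables...
        self.inner_optimizer.build(variables)
        # ...and also call the base implementation so that
        # self._trainable_variables is populated, which apply(grads)
        # relies on when trainable_variables is not passed in.
        super().build(variables)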

unsupported optimizer arguments

`LossScaleOptimizer.__init__` now also fails when other arguments from the base optimizer are passed in, since they are not actually supported. They are also no longer returned by `get_config`.
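
A sketch of the new behavior, assuming a simplified `__init__` (argument names follow the public `LossScaleOptimizer` signature, but details may differ from the merged code):

import keras

class LossScaleOptimizer(keras.optimizers.Optimizer):  # simplified sketch
    def __init__(
        self,
        inner_optimizer,
        initial_scale=2.0**15,
        dynamic_growth_steps=2000,
        **kwargs,
    ):
        # Base-optimizer arguments (e.g. clipnorm, ema_momentum) are not
        # supported by the wrapper; fail loudly instead of silently
        # accepting and ignoring them.
        if kwargs:
            raise ValueError(
                "LossScaleOptimizer does not support arguments of the "
                f"base optimizer. Received: {list(kwargs)}"
            )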

@gemini-code-assist (Contributor)

Summary of Changes

Hello @hertschuh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical bugs within the LossScaleOptimizer concerning its handling of variables, particularly in scenarios involving JAX and when trainable_variables are not explicitly supplied to the apply method. The changes ensure that the optimizer correctly identifies and manipulates the variables managed by its inner optimizer, thereby enhancing the robustness and accuracy of mixed-precision training workflows.

Highlights

  • LossScaleOptimizer Variable Handling Fix: Corrected LossScaleOptimizer to properly reference trainable_variables from its inner_optimizer when applying gradients. This resolves issues where overwrite_with_gradient was ineffective on JAX and apply(grads) failed without explicit trainable_variables.
  • stateless_apply Argument Clarification: Updated the stateless_apply method's docstring in BaseOptimizer to explicitly state that the optimizer_variables and trainable_variables arguments should be native tensors (values) rather than keras.Variable objects, improving clarity and correctness for stateless operations (see the usage sketch after this list).
  • Enhanced Test Coverage: Introduced a new test case (test_apply_with_no_vars) to validate the LossScaleOptimizer.apply method when trainable_variables are not explicitly provided. Additionally, existing tests were updated to correctly pass variable values to stateless_apply, aligning with the clarified argument types.
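
As a concrete illustration of that contract, a minimal usage sketch (names are illustrative): pass values, not keras.Variable objects.

optimizer.build(trainable_variables)

new_trainable, new_optimizer_vars = optimizer.stateless_apply(
    [v.value for v in optimizer.variables],    # values, not Variables
    grads,
    [v.value for v in trainable_variables],    # values, not Variables
)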

@gemini-code-assist bot left a comment


Code Review

This pull request introduces important bug fixes for variable handling within the LossScaleOptimizer, particularly for JAX and stateless execution scenarios. The core logic changes correctly reference the inner optimizer's variables, resolving issues where overwrite_with_gradient was ineffective and apply(grads) would fail. The accompanying test updates and the new test case are valuable additions that improve test correctness and coverage. My review focuses on a couple of points: improving variable naming in tests to avoid shadowing Python built-ins, and highlighting the use of private attributes, which goes against the provided style guide's principle of encapsulation. Overall, the functional changes are solid.

Comment on lines +38 to +43
vars = [backend.Variable([1.0, 2.0, 3.0, 4.0])]
optimizer.build(vars)
optimizer.apply(grads)
self.assertAllClose(
    vars, [[0.5, -1.0, -0.5, 3.0]], rtol=1e-4, atol=1e-4
)

Severity: medium

The variable name `vars` shadows the built-in Python function `vars()`. It's a good practice to avoid this to prevent confusion and potential bugs. Consider renaming it to something more descriptive like `model_variables` or `trainable_variables` to improve clarity.¹

Suggested change

- vars = [backend.Variable([1.0, 2.0, 3.0, 4.0])]
- optimizer.build(vars)
+ model_variables = [backend.Variable([1.0, 2.0, 3.0, 4.0])]
+ optimizer.build(model_variables)
  optimizer.apply(grads)
  self.assertAllClose(
-     vars, [[0.5, -1.0, -0.5, 3.0]], rtol=1e-4, atol=1e-4
+     model_variables, [[0.5, -1.0, -0.5, 3.0]], rtol=1e-4, atol=1e-4
  )

Style Guide References

Footnotes

  1. Avoid overly generic names and follow Python naming conventions. Shadowing built-in functions is discouraged.

@codecov-commenter commented Oct 1, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.60%. Comparing base (5ae5503) to head (3c41697).
⚠️ Report is 413 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21706   +/-   ##
=======================================
  Coverage   82.59%   82.60%           
=======================================
  Files         572      572           
  Lines       58327    58328    +1     
  Branches     9131     9134    +3     
=======================================
+ Hits        48177    48181    +4     
+ Misses       7818     7817    -1     
+ Partials     2332     2330    -2     
Flag              Coverage Δ
keras             82.40% <100.00%> (+<0.01%) ⬆️
keras-jax         63.31% <100.00%> (+0.01%) ⬆️
keras-numpy       57.66% <100.00%> (+0.01%) ⬆️
keras-openvino    34.31% <0.00%> (-0.01%) ⬇️
keras-tensorflow  64.06% <100.00%> (+0.01%) ⬆️
keras-torch       63.64% <100.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@hertschuh (Collaborator, Author) commented:

@cantonios can you review?

  optimizer.build(vars)
  vars, _ = optimizer.stateless_apply(
-     optimizer.variables, grads, vars
+     [v.value for v in optimizer.variables],
Contributor:

I guess this is because of that JAX deprecation issue?

Contributor:

How prevalent is this pattern across keras? If we can no longer rely on the __jax_array__ built-in to auto convert, maybe there should be a helper function like optimizer.variable_values that unwraps the variables?
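
A sketch of what such a helper could look like (hypothetical; it is only suggested here, not part of the Keras API):

@property
def variable_values(self):
    # Hypothetical helper: unwrap the optimizer's variables into plain
    # tensors for use with stateless_apply.
    return [v.value for v in self.variables]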

@hertschuh (Collaborator, Author):

I uncovered this as part of the __jax_array__ deprecation. See #21702

However, I discovered in the process that we're testing `stateless_apply` the wrong way. `stateless_apply` takes tensors, not variables (unlike what the names suggest). So this change to the unit tests is correct, and is independent of the `__jax_array__` deprecation.

This pattern of unwrapping variables was only added in this test file (see #21702). The pattern does exist to some extent elsewhere, but in the JAX trainer, not the optimizers.

Contributor:

Thanks, this all looks better.

@hertschuh force-pushed the loss_scale_optimizer branch from cd630c6 to 3c41697 on October 1, 2025
@google-ml-butler bot added the kokoro:force-run and "ready to pull" (Ready to be merged into the codebase) labels on Oct 2, 2025
@hertschuh merged commit f279e93 into keras-team:master on Oct 2, 2025
12 checks passed
@hertschuh deleted the loss_scale_optimizer branch on October 2, 2025
jerryxyj added a commit to jerryxyj/keras that referenced this pull request on Feb 14, 2026; among many unrelated upstream changes, its message includes "Bug fixes with variable handling in `LossScaleOptimizer`. (keras-team#21706)".

Labels

ready to pull (Ready to be merged into the codebase), size:M

6 participants