CVE-2022-29212 – tensorflow
Package
Manager: pip
Name: tensorflow
Vulnerable Version: >=0 <2.6.4 || >=2.7.0 <2.7.2 || >=2.8.0 <2.8.1
Severity
Level: Medium
CVSS v3.1: CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
CVSS v4.0: CVSS:4.0/AV:L/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS: 0.00084 (percentile: 0.25413)
Details
Core dump when loading TFLite models with quantization in TensorFlow

### Impact
Certain TFLite models created with the TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that, during quantization, the scale of values can be greater than 1, but the code always assumed sub-unit scaling. Because the code called [`QuantizeMultiplierSmallerThanOneExp`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/lite/kernels/internal/quantization_util.cc#L114-L123), the `TFLITE_CHECK_LT` assertion would trigger and abort the process.

### Patches
We have patched the issue in GitHub commit [a989426ee1346693cc015792f11d715f6944f2b8](https://github.com/tensorflow/tensorflow/commit/a989426ee1346693cc015792f11d715f6944f2b8). The fix will be included in TensorFlow 2.9.0. We will also cherry-pick this commit onto TensorFlow 2.8.1, TensorFlow 2.7.2, and TensorFlow 2.6.4, as these are also affected and still within the supported range.

### For more information
Please consult [our security guide](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for more information regarding the security model and how to contact us with issues and questions.

### Attribution
This vulnerability has been reported externally via a [GitHub issue](https://github.com/tensorflow/tensorflow/issues/43661).
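As a minimal sketch (not part of the original advisory), the following Python snippet guards TFLite model loading on the installed TensorFlow version, refusing to load an untrusted model on releases that predate the fix. The version boundaries are taken from this advisory; the helper name `tflite_quantization_fix_present` and the file name `untrusted_model.tflite` are illustrative assumptions.

```python
# Sketch: refuse to load untrusted TFLite models on TensorFlow versions
# affected by CVE-2022-29212 (fixed in 2.9.0, 2.8.1, 2.7.2, 2.6.4).
from packaging.version import Version
import tensorflow as tf

def tflite_quantization_fix_present() -> bool:
    """Return True if the installed TensorFlow includes the CVE-2022-29212 fix."""
    v = Version(tf.__version__)
    if v >= Version("2.9.0"):
        return True
    # Patched point releases on the older minor lines listed in the advisory.
    for fixed in ("2.8.1", "2.7.2", "2.6.4"):
        f = Version(fixed)
        if (v.major, v.minor) == (f.major, f.minor):
            return v >= f
    return False

if tflite_quantization_fix_present():
    # Safe to load: the interpreter rejects out-of-range quantization scales
    # instead of aborting the process.
    interpreter = tf.lite.Interpreter(model_path="untrusted_model.tflite")
    interpreter.allocate_tensors()
else:
    raise RuntimeError(
        "Installed TensorFlow is vulnerable to CVE-2022-29212; "
        "upgrade before loading untrusted TFLite models."
    )
```

Note that the crash itself is a C++ `TFLITE_CHECK_LT` abort, so it cannot be caught as a Python exception; checking the version before loading untrusted models is the practical mitigation short of upgrading.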
Metadata
Created: 2022-05-24T22:16:08Z
Modified: 2022-06-06T18:00:44Z
Source: https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2022/05/GHSA-8wwm-6264-x792/GHSA-8wwm-6264-x792.json
CWE IDs: ["CWE-20"]
Alternative ID: GHSA-8wwm-6264-x792
Finding: F184
Auto approve: 1