[QNN] Fix LiftConstantScalarOperands to handle aten.pow.Scalar #18994
Conversation
`LiftConstantScalarOperands` skips `aten.pow.Scalar` nodes because the op is absent from `SCALAR_OPS`. After the pass runs, these nodes remain in the graph and the QNN partitioner raises `KeyError` since no visitor is registered for `aten.pow.Scalar`. Additionally, `_build_tensor_constant` crashes with `AttributeError: 'float' object has no attribute 'meta'` when processing `aten.pow.Scalar`, because `args[0]` is a Python float (the scalar base) rather than an `fx.Node`.

Fixes:
- Register `aten.pow.Scalar -> aten.pow.Tensor_Tensor` in `SCALAR_OPS`, matching the existing `aten.pow.Tensor_Scalar` entry. The schema (Scalar base, Tensor exponent) correctly maps to `aten.pow.Tensor_Tensor(Tensor base, Tensor exponent)`.
- Guard the `_build_tensor_constant` dtype lookup with `isinstance(first_arg, fx.Node)` so non-Node args (float, int) fall through to the default `node.meta["val"].dtype` path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18994

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.

❌ 1 New Failure, 2 Unrelated Failures as of commit 0c4b191 with merge base 273888f:
- NEW FAILURE: the following job has failed.
- BROKEN TRUNK: the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
Fixes a QNN PT2E export failure where LiftConstantScalarOperands missed aten.pow.Scalar, leaving unsupported nodes for the QNN partitioner and potentially crashing when the scalar operand is a Python float.
Changes:
- Register `aten.pow.Scalar` in `SCALAR_OPS`, mapping it to `aten.pow.Tensor_Tensor`.
- Guard `_build_tensor_constant` dtype selection so non-`fx.Node` operands (e.g., Python floats) don't cause attribute errors.
```python
aten.sub.Scalar: TensorOpInfo(aten.sub.Tensor, False, False),
aten.sub.Tensor: TensorOpInfo(aten.sub.Tensor, False, False),
aten.pow.Tensor_Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
aten.pow.Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
```
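The dtype guard discussed in this thread can be sketched without torch; `FakeNode` and `FakeVal` below are hypothetical stand-ins for `torch.fx.Node` and its `meta["val"]` record, not names from the PR:

```python
# Torch-free sketch of the dtype-selection guard added to
# _build_tensor_constant. FakeNode / FakeVal are hypothetical stand-ins
# for torch.fx.Node and its meta["val"] record.

class FakeVal:
    def __init__(self, dtype):
        self.dtype = dtype

class FakeNode:
    def __init__(self, dtype, args):
        self.meta = {"val": FakeVal(dtype)}
        self.args = args

def pick_constant_dtype(node):
    """Choose the dtype for the lifted scalar constant.

    Before the fix, the pass unconditionally read args[0].meta, which
    raises AttributeError when args[0] is a Python float (the scalar
    base of aten.pow.Scalar).
    """
    first_arg = node.args[0]
    if isinstance(first_arg, FakeNode):     # the guard the PR adds
        return first_arg.meta["val"].dtype  # tensor operand: reuse its dtype
    return node.meta["val"].dtype           # scalar operand: fall back to the node

# pow.Tensor_Scalar-shaped node: args[0] is a (fake) tensor node
tensor_first = FakeNode("float32", args=[FakeNode("int64", args=[]), 2])
# pow.Scalar-shaped node: args[0] is a Python float
float_first = FakeNode("float32", args=[2.0, FakeNode("int64", args=[])])

print(pick_constant_dtype(tensor_first))  # int64 (taken from args[0])
print(pick_constant_dtype(float_first))   # float32 (fallback, no crash)
```

With the guard, a float `args[0]` simply falls through to the node's own dtype instead of raising.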
Add/extend a unit test that exercises `aten.pow.Scalar` so this regression stays fixed (e.g., export a tiny module using `torch.pow(2.0, x)`, run `LiftConstantScalarOperands`, and assert the node target becomes `aten.pow.Tensor_Tensor` and the pass doesn't crash when `args[0]` is a Python float).
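Such a regression check could be approximated without torch along these lines; the string targets stand in for the real `OpOverload` objects, `SimpleNode` for `torch.fx.Node`, and the `TensorOpInfo` field names are illustrative guesses rather than the pass's actual ones:

```python
# Torch-free approximation of the SCALAR_OPS rewrite regression check.
# Strings stand in for OpOverloads; SimpleNode stands in for torch.fx.Node;
# the TensorOpInfo field names below are illustrative, not the real ones.
from collections import namedtuple

TensorOpInfo = namedtuple("TensorOpInfo", ["target", "flag_a", "flag_b"])

SCALAR_OPS = {
    "aten.pow.Tensor_Scalar": TensorOpInfo("aten.pow.Tensor_Tensor", False, False),
    "aten.pow.Scalar": TensorOpInfo("aten.pow.Tensor_Tensor", False, False),  # entry added by the PR
}

class SimpleNode:
    def __init__(self, target, args):
        self.target = target
        self.args = args

def lift_constant_scalar(node):
    """Rewrite a registered scalar op to its tensor-tensor form."""
    info = SCALAR_OPS.get(node.target)
    if info is None:
        return node  # unregistered op stays in the graph (the pre-fix failure mode)
    node.target = info.target
    return node

node = SimpleNode("aten.pow.Scalar", args=[2.0, "x"])
lift_constant_scalar(node)
print(node.target)  # aten.pow.Tensor_Tensor
```

The real test would of course go through `torch.export` and the actual pass; this only models the mapping behavior being asserted.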
@pytorchbot label "release notes: module:qnn"

Didn't find following labels among repository labels: release notes: backend:qualcomm

@pytorchbot label "release notes: module:qnn"

Didn't find following labels among repository labels: release notes: module:qnn

@pytorchbot label "release notes: module: qnn"

Didn't find following labels among repository labels: release notes: module: qnn

@pytorchbot label "release notes" "module: qnn"

Didn't find following labels among repository labels: release notes
shewu-quic left a comment
Thanks for your contribution.
Could you please help add a test case for `aten.pow.Scalar` into `test_qnn_delegate.py: TestQNNFloatingPointOperator` and `test_qnn_delegate.py: TestQNNQuantizedOperator`?
Such as this test case.
Hi @shewu-quic, I have uploaded new test cases. Thanks for the help!
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```python
PowScalar(9),  # base=9, integer exp case  # noqa: F405
PowScalar(0.5),  # base=0.5, fractional case  # noqa: F405
```

The inline comments describe these values as exponent-related (e.g., "integer exp case"), but in PowScalar the varying constructor argument is the scalar base (the exponent is the tensor input). Update the comments to refer to the base to avoid confusion when reading/fixing test failures.

Suggested change:

```diff
-PowScalar(9),  # base=9, integer exp case  # noqa: F405
-PowScalar(0.5),  # base=0.5, fractional case  # noqa: F405
+PowScalar(9),  # base=9, integer base case  # noqa: F405
+PowScalar(0.5),  # base=0.5, fractional base case  # noqa: F405
```
```python
PowScalar(9),  # base=9, integer exp case  # noqa: F405
PowScalar(0.5),  # base=0.5, fractional case  # noqa: F405
```

Same as the floating-point test: these comments describe exponent cases, but the varied constructor argument in `PowScalar(...)` is the scalar base. Please adjust the comments to refer to the base to prevent misunderstanding.

Suggested change:

```diff
-PowScalar(9),  # base=9, integer exp case  # noqa: F405
-PowScalar(0.5),  # base=0.5, fractional case  # noqa: F405
+PowScalar(9),  # base=9, integer base case  # noqa: F405
+PowScalar(0.5),  # base=0.5, fractional base case  # noqa: F405
```
@abhinaykukkadapu has imported this pull request. If you are a Meta employee, you can view this in D102019844.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

Fixes #18993

During PT2E quantization export via `to_edge_transform_and_lower_to_qnn()` on the QNN backend, the partitioner raises `KeyError` when encountering `aten.pow.Scalar` nodes. The root cause is two-fold:

- `LiftConstantScalarOperands` skips `aten.pow.Scalar` because the op is absent from `SCALAR_OPS`. After the pass runs, these nodes remain in the graph and the QNN partitioner cannot find a corresponding visitor.
- Additionally, `_build_tensor_constant` crashes with `AttributeError: 'float' object has no attribute 'meta'` when processing `aten.pow.Scalar`, because `args[0]` is a Python float (the scalar base) rather than an `fx.Node`.

This fix makes `LiftConstantScalarOperands` handle `aten.pow.Scalar` by:

- Registering `aten.pow.Scalar -> aten.pow.Tensor_Tensor` in `SCALAR_OPS`, matching the existing `aten.pow.Tensor_Scalar` entry. The schema `(Scalar base, Tensor exponent)` correctly maps to `aten.pow.Tensor_Tensor(Tensor base, Tensor exponent)`.
- Guarding the dtype lookup in `_build_tensor_constant` with `isinstance(first_arg, fx.Node)` so non-Node args fall through to the default `node.meta["val"].dtype` path instead of crashing.

Changed in: `backends/qualcomm/_passes/lift_constant_scalar_operands.py`

Diff:

```diff
 SCALAR_OPS = {
     ...
     aten.pow.Tensor_Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
+    aten.pow.Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
     ...
 }

 def _build_tensor_constant(...):
-    node.args[0].meta["val"].dtype
-    if not is_float_tensor(node)
+    first_arg = node.args[0]
+    first_arg.meta["val"].dtype
+    if isinstance(first_arg, fx.Node)
+    and not is_float_tensor(node)
```

Test plan
- [x] Verify `aten.pow.Scalar` nodes are converted to `aten.pow.Tensor_Tensor` after `transform_for_export_pipeline` in the QNN pipeline.

  Test result:

- [x] Unit test `test_qnn_backend_pow_scalar` in `TestQNNFloatingPointOperator` passes with x86_64 backend: covers `PowScalar` models with `base` ∈ {2.0, 3.0, 9, 0.5}, integer/square-root exponents where PyTorch and QNN produce matching results.

  Test result:
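As a plain-Python sanity check (no torch) that the schema mapping exercised by this test plan is sound: lifting the scalar base of `pow.Scalar` into a constant tensor must reproduce the elementwise `pow.Tensor_Tensor` result. A minimal sketch using lists in place of tensors:

```python
# Plain-Python sanity check (no torch): lifting the scalar base of
# pow.Scalar(base, exponents) to a constant "tensor" of the same length
# must reproduce the elementwise result of a tensor-tensor pow.

def pow_scalar(base, exponents):
    # aten.pow.Scalar semantics: scalar base, tensor exponent
    return [base ** e for e in exponents]

def pow_tensor_tensor(bases, exponents):
    # aten.pow.Tensor_Tensor semantics: elementwise base ** exponent
    return [b ** e for b, e in zip(bases, exponents)]

exponents = [0.0, 1.0, 2.0, 0.5]
base = 9.0
lifted_base = [base] * len(exponents)  # what LiftConstantScalarOperands materializes

print(pow_scalar(base, exponents))                # [1.0, 9.0, 81.0, 3.0]
print(pow_tensor_tensor(lifted_base, exponents))  # same values
```

This mirrors the integer and square-root exponent cases the unit tests cover, without depending on the QNN backend.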
Release notes: qualcomm
🤖 Generated with Claude Code
cc @cccclai @cbilgin @abhinaykukkadapu