
[QNN] Fix LiftConstantScalarOperands to handle aten.pow.Scalar #18994

Merged
abhinaykukkadapu merged 4 commits into pytorch:main from KevinUW114514:fix/qnn-lift-constant-scalar-operands-pow
Apr 22, 2026

Conversation

@KevinUW114514
Contributor

@KevinUW114514 KevinUW114514 commented Apr 20, 2026

Fixes #18993

During PT2E quantization export via to_edge_transform_and_lower_to_qnn() on
the QNN backend, the partitioner raises KeyError when encountering
aten.pow.Scalar nodes. The root cause is two-fold:

- LiftConstantScalarOperands skips aten.pow.Scalar because the op is absent
  from SCALAR_OPS. After the pass runs, these nodes remain in the graph and
  the QNN partitioner cannot find a corresponding visitor.
- _build_tensor_constant crashes with AttributeError: 'float' object has no
  attribute 'meta' when processing aten.pow.Scalar, because args[0] is a
  Python float (the scalar base) rather than an fx.Node.

This fix makes LiftConstantScalarOperands handle aten.pow.Scalar by:

  1. Registering aten.pow.Scalar -> aten.pow.Tensor_Tensor in SCALAR_OPS,
    matching the existing aten.pow.Tensor_Scalar entry. The schema
    (Scalar base, Tensor exponent) correctly maps to
    aten.pow.Tensor_Tensor(Tensor base, Tensor exponent).
  2. Guarding the dtype lookup in _build_tensor_constant with
    isinstance(first_arg, fx.Node) so non-Node args fall through to the
    default node.meta["val"].dtype path instead of crashing.

Changed in: backends/qualcomm/_passes/lift_constant_scalar_operands.py

Diff:

 SCALAR_OPS = {
     ...
     aten.pow.Tensor_Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
+    aten.pow.Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
     ...
 }

 def _build_tensor_constant(...):
     ...
-    node.args[0].meta["val"].dtype
-    if not is_float_tensor(node)
+    first_arg = node.args[0]
+    first_arg.meta["val"].dtype
+    if isinstance(first_arg, fx.Node) and not is_float_tensor(node)
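
The guard can be sketched in isolation as follows. This is a minimal illustration rather than the actual pass code: `pick_dtype` and the hand-built graph are assumptions for demonstration only.

```python
import torch
from torch import fx

def pick_dtype(node: fx.Node) -> torch.dtype:
    # Sketch of the guard: use the first arg's dtype only when the first
    # arg is an fx.Node carrying tensor metadata. A Python scalar (the
    # float base of aten.pow.Scalar) has no .meta attribute, so it must
    # fall through to the node's own output dtype instead of crashing.
    first_arg = node.args[0]
    if isinstance(first_arg, fx.Node) and not node.meta["val"].is_floating_point():
        return first_arg.meta["val"].dtype
    return node.meta["val"].dtype

# Hand-built graph standing in for an exported aten.pow.Scalar node.
g = fx.Graph()
x = g.placeholder("x")
x.meta["val"] = torch.empty(2, dtype=torch.int64)
pow_node = g.call_function(torch.pow, (2.0, x))  # float first arg
pow_node.meta["val"] = torch.empty(2, dtype=torch.float32)
```

With a float first arg, `pick_dtype` takes the fallback path; with a Node first arg and a non-float output, it reads the arg's metadata, mirroring the intent of the diff above.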

Test plan

- [x] Verify aten.pow.Scalar nodes are converted to aten.pow.Tensor_Tensor
  after transform_for_export_pipeline in the QNN pipeline

Test result:

  • Debug log before:
  Quantizing(PTQ) the model...

  === [VERIFY] Pow nodes in quantized_model.graph ===
    aten.pow.Scalar  args: ['float', 'Node']
      schema: aten::pow.Scalar(Scalar self, Tensor exponent) -> Tensor
    (identical entry repeated for all 8 pow nodes)
  === [VERIFY] Done. Pausing for inspection... ===
  Press Enter to continue...
  === [VERIFY] Before transform_for_export_pipeline (forward) ===
    BEFORE: aten.pow.Scalar  (repeated ×8)
  === [VERIFY] Before transform done ===
  Press Enter to continue...
  [WARNING] the 0 th arg of node arange is NumberType, might need to lift
  [WARNING] the 1 th arg of node arange is NumberType, might need to lift
  [WARNING] the 0 th arg of node pow_1 is NumberType, might need to lift
  (same warning repeated for pow_2 through pow_8)

  === [VERIFY] After transform_for_export_pipeline (forward) ===
    AFTER: aten.pow.Scalar  (repeated ×8)
  === [VERIFY] After transform done ===
  • Debug log after:
Quantizing(PTQ) the model...

=== [VERIFY] Pow nodes in quantized_model.graph ===
  aten.pow.Tensor_Tensor  args: ['Node', 'Node']
    schema: aten::pow.Tensor_Tensor(Tensor self, Tensor exponent) -> Tensor
  (identical entry repeated for all 8 pow nodes)
=== [VERIFY] Done. Pausing for inspection... ===
Press Enter to continue...
=== [VERIFY] Before transform_for_export_pipeline (forward) ===
  BEFORE: aten.pow.Tensor_Tensor  (repeated ×8)
=== [VERIFY] Before transform done ===
Press Enter to continue...
[WARNING] the 0 th arg of node arange is NumberType, might need to lift
[WARNING] the 1 th arg of node arange is NumberType, might need to lift

=== [VERIFY] After transform_for_export_pipeline (forward) ===
  AFTER: aten.pow.Tensor_Tensor  (repeated ×8)
=== [VERIFY] After transform done ===
Press Enter to continue...

- [x] Unit test test_qnn_backend_pow_scalar in TestQNNFloatingPointOperator
  passes with x86_64 backend:

$ python backends/qualcomm/tests/test_qnn_delegate.py -k "test_qnn_backend_pow" --soc_model SM8650 --build_folder build-x86/ --executorch_root . --enable_x86_64

Covers PowScalar models with base ∈ {2.0, 3.0, 9, 0.5}: integer and
fractional (square-root) bases for which PyTorch and QNN produce matching
results.
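
As an eager-mode sanity check on the numerics those bases exercise, a sketch follows. The real PowScalar model lives in the QNN test suite; this stand-in only mirrors its assumed shape (fixed scalar base, tensor exponent).

```python
import torch

class PowScalar(torch.nn.Module):
    # Assumed shape of the test model: fixed scalar base, tensor exponent.
    def __init__(self, base):
        super().__init__()
        self.base = base

    def forward(self, x):
        return torch.pow(self.base, x)

exps = torch.tensor([0.0, 1.0, 2.0])
for base in (2.0, 3.0, 9, 0.5):
    out = PowScalar(base)(exps)
    # Reference computed with plain Python exponentiation.
    ref = torch.tensor([float(base) ** e for e in exps.tolist()])
    assert torch.allclose(out, ref)
```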

Test result

...
----------------------------------------------------------------------
Ran 4 tests in 22.949s

OK

Release notes: qualcomm


🤖 Generated with Claude Code

cc @cccclai @cbilgin @abhinaykukkadapu

LiftConstantScalarOperands skips aten.pow.Scalar nodes because the op
is absent from SCALAR_OPS. After the pass runs, these nodes remain in
the graph and the QNN partitioner raises KeyError since no visitor is
registered for aten.pow.Scalar.

Additionally, _build_tensor_constant crashes with
AttributeError: 'float' object has no attribute 'meta' when processing
aten.pow.Scalar, because args[0] is a Python float (the scalar base)
rather than an fx.Node.

Fixes:
- Register aten.pow.Scalar -> aten.pow.Tensor_Tensor in SCALAR_OPS,
  matching the existing aten.pow.Tensor_Scalar entry. The schema
  (Scalar base, Tensor exponent) correctly maps to
  aten.pow.Tensor_Tensor(Tensor base, Tensor exponent).
- Guard _build_tensor_constant dtype lookup with isinstance(first_arg,
  fx.Node) so non-Node args (float, int) fall through to the default
  node.meta["val"].dtype path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 20, 2026 04:46
@pytorch-bot

pytorch-bot Bot commented Apr 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18994

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 2 Unrelated Failures

As of commit 0c4b191 with merge base 273888f:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 20, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Fixes a QNN PT2E export failure where LiftConstantScalarOperands missed aten.pow.Scalar, leaving unsupported nodes for the QNN partitioner and potentially crashing when the scalar operand is a Python float.

Changes:

  • Register aten.pow.Scalar in SCALAR_OPS, mapping it to aten.pow.Tensor_Tensor.
  • Guard _build_tensor_constant dtype selection so non-fx.Node operands (e.g., Python floats) don’t cause attribute errors.


aten.sub.Scalar: TensorOpInfo(aten.sub.Tensor, False, False),
aten.sub.Tensor: TensorOpInfo(aten.sub.Tensor, False, False),
aten.pow.Tensor_Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),
aten.pow.Scalar: TensorOpInfo(aten.pow.Tensor_Tensor, False, False),

Copilot AI Apr 20, 2026


Add/extend a unit test that exercises aten.pow.Scalar so this regression stays fixed (e.g., export a tiny module using torch.pow(2.0, x), run LiftConstantScalarOperands, and assert the node target becomes aten.pow.Tensor_Tensor and the pass doesn’t crash when args[0] is a Python float).

@KevinUW114514
Contributor Author

KevinUW114514 commented Apr 20, 2026

@pytorchbot label "release notes: module:qnn"

@pytorch-bot

pytorch-bot Bot commented Apr 20, 2026

Didn't find following labels among repository labels: release notes: backend:qualcomm

@KevinUW114514 KevinUW114514 changed the title Fix LiftConstantScalarOperands to handle aten.pow.Scalar [QNN] Fix LiftConstantScalarOperands to handle aten.pow.Scalar Apr 20, 2026
@KevinUW114514
Contributor Author

@pytorchbot label "release notes: module:qnn"

@pytorch-bot

pytorch-bot Bot commented Apr 20, 2026

Didn't find following labels among repository labels: release notes: module:qnn

@KevinUW114514
Contributor Author

@pytorchbot label "release notes: module: qnn"

@pytorch-bot

pytorch-bot Bot commented Apr 20, 2026

Didn't find following labels among repository labels: release notes: module: qnn

@KevinUW114514
Contributor Author

@pytorchbot label "release notes" "module: qnn"

@pytorch-bot

pytorch-bot Bot commented Apr 20, 2026

Didn't find following labels among repository labels: release notes

@pytorch-bot pytorch-bot Bot added the module: qnn Issues related to Qualcomm's QNN delegate and code under backends/qualcomm/ label Apr 20, 2026
Collaborator

@shewu-quic shewu-quic left a comment


Thanks for your contribution.
Could you please add a test case for aten.pow.Scalar to test_qnn_delegate.py: TestQNNFloatingPointOperator and test_qnn_delegate.py: TestQNNQuantizedOperator?
Such as this test case

def test_qnn_backend_pow_tensor_scalar(self):

@KevinUW114514
Contributor Author

Hi @shewu-quic , I have uploaded new test cases. Thanks for the help!

Copilot AI review requested due to automatic review settings April 20, 2026 14:51
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Comment on lines +1722 to +1723
PowScalar(9), # base=9, integer exp case # noqa: F405
PowScalar(0.5), # base=0.5, fractional case # noqa: F405

Copilot AI Apr 20, 2026


The inline comments describe these values as exponent-related (e.g., "integer exp case"), but in PowScalar the varying constructor argument is the scalar base (the exponent is the tensor input). Update the comments to refer to the base to avoid confusion when reading/fixing test failures.

Suggested change
-PowScalar(9), # base=9, integer exp case # noqa: F405
-PowScalar(0.5), # base=0.5, fractional case # noqa: F405
+PowScalar(9), # base=9, integer base case # noqa: F405
+PowScalar(0.5), # base=0.5, fractional base case # noqa: F405

Comment on lines +4226 to +4227
PowScalar(9), # base=9, integer exp case # noqa: F405
PowScalar(0.5), # base=0.5, fractional case # noqa: F405

Copilot AI Apr 20, 2026


Same as the floating-point test: these comments describe exponent cases, but the varied constructor argument in PowScalar(...) is the scalar base. Please adjust the comments to refer to the base to prevent misunderstanding.

Suggested change
-PowScalar(9), # base=9, integer exp case # noqa: F405
-PowScalar(0.5), # base=0.5, fractional case # noqa: F405
+PowScalar(9), # base=9, integer base case # noqa: F405
+PowScalar(0.5), # base=0.5, fractional base case # noqa: F405

Collaborator

@shewu-quic shewu-quic left a comment


LGTM. Thanks!

@meta-codesync
Contributor

meta-codesync Bot commented Apr 22, 2026

@abhinaykukkadapu has imported this pull request. If you are a Meta employee, you can view this in D102019844.

@abhinaykukkadapu abhinaykukkadapu merged commit 0919746 into pytorch:main Apr 22, 2026
170 of 173 checks passed
Gasoonjia pushed a commit that referenced this pull request Apr 23, 2026
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. module: qnn Issues related to Qualcomm's QNN delegate and code under backends/qualcomm/


Development

Successfully merging this pull request may close these issues.

[QNN] KeyError: 'aten.pow.Scalar' — LiftConstantScalarOperands misses aten.pow.Scalar in SCALAR_OPS (and crashes on float arg)

5 participants