[TorchToTosa] add conv reshape in core lowering#4494
Closed
Conversation
- sahas3 reviewed (Mar 11, 2026)
- Lallapallooza requested changes (Mar 12, 2026)
- Lallapallooza requested changes (Mar 19, 2026)
- sahas3 requested changes (Mar 23, 2026)
Commits (all Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>):
- Insert rank-4/5 reshapes for conv inputs/weights during TorchToTosa lowering (Change-Id: Ica1b5cc265822ecd054f832908ec31bc2325c661)
- Change-Id: I5c0c1a5ae2d90cee500dc76247f5952b99bb48f9
- Change-Id: I55752f920f8ad170b3d35c0c8bf5f8b94c4d9de0
- Change-Id: I49732cecaeb7b2ffd3a0e6bf4e74cb0d16aa5e48
- …e TOSA lowering (Change-Id: Ie4c7767b2155cfa4c81652e60d9698512285de0a)
- Change-Id: I9981da1397bbc6221f3f4a6f7c1e1c0f7991bb4b
- Change-Id: If5a019bc78b81ca7f164f6416f621c0ee1582294
sahas3
reviewed
Mar 27, 2026
        np.asarray(tp.float_data, dtype=np.float32).reshape(tp.dims), signless=False
    ),
    onnx.TensorProto.DataType.BOOL: lambda tp: DenseElementsAttr.get(
        np.asarray(tp.int32_data, dtype=np.bool_).reshape(tp.dims),
Member
Is this change intended? It'll be good to separate this change into a different PR.
%weight_template = tosa.reshape %weight_builtin, %shape_builtin : (tensor<256xf32>, !tosa.shape<4>) -> tensor<1x1x16x16xf32>
%input_flat = builtin.unrealized_conversion_cast %input_template : tensor<1x1x16x16xf32> to !torch.vtensor<[256],f32>
%weight_flat = builtin.unrealized_conversion_cast %weight_template : tensor<1x1x16x16xf32> to !torch.vtensor<[256],f32>
%conv = torch.aten.convolution %input_flat, %weight_flat, %arg2, %stride, %padding, %dilation, %false, %output_padding, %int1 : !torch.vtensor<[256],f32>, !torch.vtensor<[256],f32>, !torch.vtensor<[1],f32>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[1,1,1,1],f32>
Member
Thanks, @catcor01 for adding these positive testcases. I now understand fully what the proposed change is trying to accomplish.
However, before proceeding further I'd like to understand how such torch IR is being generated -- looking at https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv2d.html it seems that conv only accepts 4D/3D tensors as inputs/weights. Is there an upstream pass that flattens the shape like shown here and is that the bug we should try to resolve? Thanks!
Contributor
Author
Closing as not required based on comment #4494 (comment).
This change improves Torch-to-TOSA convolution lowering when convolution
operands are not already at the rank TOSA requires.
The lowering now supports two cases:
- direct reuse of a local reshape template when one exists
- inferring the target shape from static element counts and constant
  convolution attributes when no local template exists
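The rank normalization itself can be sketched in NumPy terms (illustrative only; the real pass rewrites MLIR types, and `to_rank4` is a hypothetical helper): given a flattened operand and a rank-4 target shape with the same static element count, the legalization amounts to a reshape, and mismatched element counts must be rejected.

```python
import numpy as np
from math import prod


def to_rank4(flat: np.ndarray, target_shape: tuple[int, int, int, int]) -> np.ndarray:
    """Reshape a flattened conv operand to the rank-4 (NCHW) shape TOSA needs.

    Only legal when the static element count matches the target shape
    exactly; anything else is rejected, mirroring the negative test
    coverage for mismatched or dynamic element counts.
    """
    if flat.size != prod(target_shape):
        raise ValueError("element count mismatch; cannot normalize rank")
    return flat.reshape(target_shape)
```

For example, the 256-element flattened input from the test IR above normalizes to `1x1x16x16`.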
Bias normalization is handled separately and kept TOSA-correct by requiring a
1D bias, or reshaping to 1D when the element count matches the output channel
count.
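The bias rule can be sketched the same way (a minimal NumPy illustration, not the actual C++ legalization; `normalize_bias` is a hypothetical name): keep a 1D bias as-is, flatten to 1D only when the element count equals the output channel count, and fail otherwise.

```python
import numpy as np


def normalize_bias(bias: np.ndarray, out_channels: int) -> np.ndarray:
    """Return a 1D bias as described above, or raise if impossible.

    A 1D bias is accepted unchanged; any other shape is reshaped to 1D
    only when its element count matches the output channel count.
    """
    if bias.ndim == 1:
        return bias
    if bias.size == out_channels:
        return bias.reshape(out_channels)  # flatten to 1D
    raise ValueError("bias element count does not match output channels")
```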
This is intentionally an internal legalization improvement: it does not change
types at the func.func boundary for real model inputs.
The tests now include positive before/after IR coverage for direct local reshape
reuse, inferred input normalization, inferred weight normalization, and bias
normalization, while keeping negative coverage for unsupported cases such as
dynamic element counts.
Change-Id: Ica1b5cc265822ecd054f832908ec31bc2325c661