
feat(api,robot-server): add offsets to analyses#21090

Merged
sfoster1 merged 8 commits into chore_release-9.0.0 from exec-2418-add-offsets-to-analysis
Mar 24, 2026

Conversation

Member

@sfoster1 sfoster1 commented Mar 23, 2026

Overview

Protocol run records offer a view of all labware offsets that were loaded into a run. Protocol analyses, however, do not. Likely this was because when we run analysis, we don't load offsets from the user's robot (there may not be one, after all, and even if there is we want a more general view) - but the protocol itself may load offsets through the use of set_offset().

The app really wants to know which labware have offsets set through set_offset(); we display UI about them to note that doing LPC for them is pointless. Today, that happens by looking at the labware key of the protocol analysis, and displaying that UI for any labware that have an offset ID set in the analysis.

The thing is, the labware key essentially captures a snapshot of the engine's labware state at the point where the analysis ends. Since offsets are based on a labware's current location, if the labware passed through earlier locations that did (or didn't) have an offset and ends the protocol in a location that doesn't (or does), then the presence or absence of an offset ID is meaningless - information in the labware key just isn't applicable to the labware's state throughout the protocol. This has led to many bugs around when the app displays a labware as having a hardcoded offset.

Since we actually have the information, let's just... put it in the analysis. This PR adds a labwareOffsets key to analysis records, just like the run records have, that contains the same information as the run records. In the common case of a protocol having no set_offset() calls, the key will be empty, because the client isn't loading in a bunch of offsets; but if the protocol uses set_offset(), the resulting labware offset with its ID, the labware URI to which it applies, and the location (by vector and by dict) to which it applies will be present in the analysis for each offset that was set, including those that are no longer in active use by the time the protocol ends.
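To make the description above concrete, here is a minimal sketch of how a client might read the new key. The exact field names (`definitionUri`, `location`, `vector`) follow the run-record convention described in this PR but should be treated as illustrative assumptions, not a schema reference:

```python
# Hypothetical shape of an analysis document carrying the new labwareOffsets
# key; field names mirror run records and are assumptions for illustration.
analysis = {
    "labwareOffsets": [
        {
            "id": "offset-1",  # offset ID assigned when set_offset() ran
            "definitionUri": "opentrons/corning_96_wellplate_360ul_flat/2",
            "location": {"slotName": "D1"},  # location the offset applies to
            "vector": {"x": 0.1, "y": -0.2, "z": 0.3},
        },
    ],
}

def offsets_for(analysis: dict, labware_uri: str) -> list[dict]:
    """Return every set_offset()-derived offset recorded for a labware URI,
    including ones no longer in active use when the protocol ends."""
    return [
        offset
        for offset in analysis.get("labwareOffsets", [])  # absent key -> []
        if offset["definitionUri"] == labware_uri
    ]

matches = offsets_for(analysis, "opentrons/corning_96_wellplate_360ul_flat/2")
print(len(matches))  # 1
```

Note the `.get(..., [])` default: it is what lets clients treat previously-generated analyses, which lack the key entirely, the same as analyses of protocols with no set_offset() calls.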

Test Plan and Hands on Testing

  • Run the analysis CLI on a protocol that uses set_offset() and check that the offset is included
  • Put the branch on a robot with pre-existing analyses and check that stored analyses without the key are still rendered (the default is to have an empty list)
  • Put the branch on a robot and upload a protocol that uses set_offset(); check that the offset is included in the analysis

Changelog

  • Add labwareOffsets: List[LabwareOffset] to the analysis CLI results object and the robot server analysis results object
  • Add TS types in shared-data that list the new key as optional so it can handle previously-generated analyses

Review requests

  • Just whether this is a good idea, I guess? See the risk assessment for more, but this adds data that already exists in the engine state store to the analysis docs without altering it in any way.

Risk assessment

Low on the face of it, maybe medium for altering analysis schemata that apply to stored analyses.

This doesn't actually close the below ticket but it does pertain to it.

Closes EXEC-2418

The values of labware offsets used in a run are available in a run
record, but they're not available in analysis. This is probably because
analyses don't run with offsets provided from clients. The thing is,
though, that analyses can add their _own_ offsets using the
labware.set_offset() method in a protocol, and clients want to know what
those offsets are for. With labware offsets absent from analysis, and
since set_offset() only runs _after_ a labware is loaded, the only way
to see offsets inside a protocol is to look at the labware object in an
analysis and check for an offset ID there. That offset ID is only
correct for the deck state captured in the labware object, though, which
is the deck state that the run _ends with_. So if a labware were in
locations A, B, and C, and A and B each had a .set_offset() call but C
didn't, it would seem like there was no offset.

We can just... put the labware offsets we'd put into a run into the
analysis too, though, and that completely solves the issue: interested
clients look at what offsets the analysis ended up with, and those can
only be offsets added by set_offset(), and they can display them.
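The A/B/C scenario above can be modeled in a few lines. Everything here is a toy illustration (no Opentrons code), showing why the end-of-run snapshot misses earlier offsets while a cumulative offsets list does not:

```python
# Toy model of the A/B/C scenario: a labware moves through locations,
# and offsets apply only to the location they were set in.
offsets_set: list[str] = []  # what a labwareOffsets list would record
current_offset_id = None     # what the end-of-run labware snapshot records

def set_offset(location: str) -> None:
    """Record an offset for the labware's current location."""
    global current_offset_id
    offset_id = f"offset-for-{location}"
    offsets_set.append(offset_id)
    current_offset_id = offset_id

def move_to(location: str) -> None:
    """Moving clears the active offset; offsets are per-location."""
    global current_offset_id
    current_offset_id = None

set_offset("A")   # offset set while in location A
move_to("B")
set_offset("B")   # offset set while in location B
move_to("C")      # protocol ends in location C, which has no offset

# The final snapshot suggests there was never an offset...
assert current_offset_id is None
# ...but the cumulative list preserves the full history.
assert offsets_set == ["offset-for-A", "offset-for-B"]
```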
Add the labware offset lists added in the previous commit to the robot
server analysis results that go over HTTP.

Add the labware offset changes from the previous commits to the
TypeScript types for analyses.
@sfoster1 sfoster1 requested a review from a team as a code owner March 23, 2026 19:36
@codecov

codecov bot commented Mar 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 56.28%. Comparing base (f50fca9) to head (e6351b3).
⚠️ Report is 2 commits behind head on chore_release-9.0.0.

Additional details and impacted files


@@                   Coverage Diff                   @@
##           chore_release-9.0.0   #21090      +/-   ##
=======================================================
+ Coverage                55.85%   56.28%   +0.42%     
=======================================================
  Files                     3924     3924              
  Lines                   328690   328698       +8     
  Branches                 48349    48353       +4     
=======================================================
+ Hits                    183603   185011    +1408     
+ Misses                  144870   143468    -1402     
- Partials                   217      219       +2     
Flag              Coverage   Δ
app               45.72% <ø> (-0.01%) ⬇️
step-generation    5.74% <ø> (ø)

Flags with carried forward coverage won't be shown.
see 40 files with indirect coverage changes


Contributor

@mjhuff mjhuff left a comment


Yay, thank you!

Contributor

@SyntaxColoring SyntaxColoring left a comment


Makes sense to me. Just nitpicks. Thanks!

Comment thread api/tests/opentrons/cli/test_cli.py Outdated
Comment thread shared-data/js/types.ts
@SyntaxColoring
Contributor

Also watch out for the analysis snapshot tests. Sometimes I see them time out, and they end up as "cancelled" instead of "failed," which is easy to miss.

@sfoster1 sfoster1 added the gen-analyses-snapshot-pr Generate a healing PR if the analyses snapshot test fails label Mar 23, 2026
@github-actions
Contributor

A PR has been opened to address analyses snapshot changes. Please review the changes here: #21094

github-actions bot and others added 2 commits March 23, 2026 16:46
…snapshots (#21094)

This PR was requested on the PR
#21090

Co-authored-by: sfoster1 <3091648+sfoster1@users.noreply.github.com>
@sfoster1 sfoster1 merged commit eafd5ff into chore_release-9.0.0 Mar 24, 2026
55 of 56 checks passed
@sfoster1 sfoster1 deleted the exec-2418-add-offsets-to-analysis branch March 24, 2026 14:47
mjhuff added a commit that referenced this pull request Mar 24, 2026
Closes EXEC-2418

The LPC labware offsets table attempts to label certain offsets as "hardcoded", those offsets that are set using set_offset() when authoring a protocol. PR #21090 provides an in-depth overview of the issue and works towards a fix by including labwareOffsets on completed protocol analyses.

This commit implements the UI fix by updating LPC's utilities to check if a hardcoded offset is present for a particular labware in a given slot, making use of labwareOffsets.

Note that modern LPC provides offsets to types of labware (labware URIs) and not specific labware instances, and the utility and UI cannot make affordances for the set_offset() behavior present in API versions 2.12-2.13.
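The per-URI, per-slot check that commit describes can be sketched like so. The function and field names here are hypothetical stand-ins for the app's actual LPC utilities, which are written in TypeScript:

```python
from dataclasses import dataclass

# Illustrative model of an offset entry from an analysis's labwareOffsets
# list; field names are assumptions, not the actual app types.
@dataclass
class LabwareOffset:
    definition_uri: str  # labware URI the offset applies to
    slot_name: str       # deck slot the offset applies to

def has_hardcoded_offset(
    labware_offsets: list[LabwareOffset],
    definition_uri: str,
    slot_name: str,
) -> bool:
    """True if the analysis recorded a set_offset() for this labware type
    in this slot - i.e., the offset the UI should label as "hardcoded"."""
    return any(
        offset.definition_uri == definition_uri
        and offset.slot_name == slot_name
        for offset in labware_offsets
    )

offsets = [LabwareOffset("opentrons/nest_12_reservoir_15ml/1", "D2")]
print(has_hardcoded_offset(offsets, "opentrons/nest_12_reservoir_15ml/1", "D2"))  # True
```

Because the match is by labware URI plus slot rather than labware instance, this naturally reflects the "types of labware, not specific instances" limitation noted above.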
SyntaxColoring added a commit that referenced this pull request Mar 30, 2026
This fixes a test failing in edge. It looks like it was just some kind of conflict with #21090.

Labels

gen-analyses-snapshot-pr Generate a healing PR if the analyses snapshot test fails
