`DiceLoss` has a `from_logits` parameter to handle both logits and probabilities, but `FocalLoss` does not. This creates an inconsistency when using models with softmax activation and requires awkward workarounds.
```python
model = smp.create_model(..., activation='softmax')
outputs = model(x)  # probabilities in [0, 1]

# This works
dice_loss = smp.losses.DiceLoss(mode='multiclass', from_logits=False)

# This fails: FocalLoss applies softmax to the output probabilities,
# producing an incorrect loss
focal_loss = smp.losses.FocalLoss(mode='multiclass')
```
Currently you need to either:
- Remove the model activation and use logits everywhere
- Manually wrap `FocalLoss` to convert probabilities back to logits
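The second workaround can be sketched as follows. The key fact is that taking the elementwise log of softmax probabilities recovers logits up to an additive constant, and softmax is invariant to that constant, so `log(probs)` can be fed to any logits-expecting loss. The helper name `probs_to_logits` is hypothetical, not part of the library:

```python
import torch

def probs_to_logits(probs, eps=1e-7):
    # Clamp to avoid log(0), then take the log. Since softmax is
    # invariant to additive constants, softmax(log(p)) == p.
    return torch.log(probs.clamp(min=eps))

# Round-trip check: recover the original probabilities
probs = torch.softmax(torch.randn(2, 3, 4, 4), dim=1)  # (B, C, H, W)
recovered = torch.softmax(probs_to_logits(probs), dim=1)
assert torch.allclose(probs, recovered, atol=1e-5)
```

A wrapper loss would then apply `probs_to_logits` to the model output before delegating to `smp.losses.FocalLoss` — workable, but exactly the kind of boilerplate a `from_logits` flag would remove.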
`FocalLoss` should have a `from_logits` parameter like `DiceLoss` for a consistent API.
Environment
- segmentation-models-pytorch version: 0.5.0
- PyTorch version: 2.7.1