optuna_integration.AllenNLPPruningCallback
- class optuna_integration.AllenNLPPruningCallback(trial=None, monitor=None)[source]
AllenNLP callback to prune unpromising trials.
See the example if you want to add a pruning callback that observes a metric.
You can also see the tutorial of our AllenNLP integration in the AllenNLP Guide.
Note
When AllenNLPPruningCallback is instantiated in a Python script, trial and monitor are mandatory. On the other hand, when AllenNLPPruningCallback is used with AllenNLPExecutor, trial and monitor would be None. AllenNLPExecutor sets environment variables for a study name, trial id, monitor, and storage. Then AllenNLPPruningCallback loads them to restore trial and monitor.
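For the script mode described above, a minimal sketch might look like the following (assuming allennlp and optuna-integration are installed; the metric name, pruner choice, and elided trainer setup are illustrative only):

import optuna
from optuna_integration import AllenNLPPruningCallback


def objective(trial: optuna.Trial) -> float:
    # In a Python script, trial and monitor must be passed explicitly.
    callback = AllenNLPPruningCallback(trial, monitor="validation_accuracy")

    # The callback would then be handed to AllenNLP's trainer via its
    # ``callbacks`` argument; model, data loader, and optimizer setup are elided:
    # trainer = GradientDescentTrainer(..., callbacks=[callback])
    # metrics = trainer.train()

    return 0.0  # placeholder; return the monitored metric here


study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)

At each epoch the callback reports the monitored metric to the trial and raises optuna.TrialPruned when the study's pruner decides to stop the trial.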
Note
Currently, built-in pruners are supported except for PatientPruner.
- Parameters:
trial (Trial | None) – A Trial corresponding to the current evaluation of the objective function.
monitor (str | None) – An evaluation metric for pruning, e.g. validation_loss or validation_accuracy.
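For the AllenNLPExecutor mode, where trial and monitor are restored from environment variables, a rough sketch follows; the config file path, serialization directory, and dropout parameter are hypothetical, and the Jsonnet config is assumed to register this callback on the AllenNLP trainer:

import optuna
from optuna_integration import AllenNLPExecutor


def objective(trial: optuna.Trial) -> float:
    trial.suggest_float("dropout", 0.0, 0.5)
    # config.jsonnet and result/ are hypothetical; the config's trainer section
    # is expected to include this pruning callback (typically type "optuna_pruner").
    executor = AllenNLPExecutor(
        trial,
        config_file="config.jsonnet",
        serialization_dir=f"result/{trial.number}",
        metrics="best_validation_accuracy",
    )
    return executor.run()


# Any built-in pruner except PatientPruner can be attached to the study.
study = optuna.create_study(direction="maximize", pruner=optuna.pruners.HyperbandPruner())
study.optimize(objective, n_trials=20)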
Warning
Deprecated in v3.5.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.5.0.
Methods
on_epoch(trainer, metrics, epoch[, is_primary]) – Check if training has reached saturation.
register(*args, **kwargs) – Stub method for TrainerCallback.register.
- on_epoch(trainer, metrics, epoch, is_primary=True, **_)[source]
Check if training has reached saturation.
- classmethod register(*args, **kwargs)
Stub method for TrainerCallback.register.
This method has the same signature as Registrable.register in AllenNLP.