pg.tuning.Feedback

Accessible via pg.tuning.Feedback.

class Feedback(metrics_to_optimize)[source]

Bases: object

Interface for the feedback object for a trial.

The Feedback object is an agent for communicating with the search algorithm and other workers about the current trial. It includes the following (a brief usage sketch follows this list):

  • Information about current example:

    • id: The ID of current example, starting from 1.

    • dna: The DNA for current example.

  • Methods to communicate with the search algorithm:

    • add_measurement: Add a measurement for current example. Multiple measurements can be added as progressive evaluation of the example, which can be used by the early stopping policy to suggest whether current evaluation can be stopped early.

    • done: Mark evaluation on current example as done, feed the reward from the latest measurement back to the algorithm, and move to the next example.

    • __call__: A shortcut method that calls add_measurement and done in sequence.

    • skip: Mark evaluation on current example as done, and move to the next example without providing feedback to the algorithm.

    • should_stop_early: Tell if progressive evaluation on current example can be stopped early.

    • end_loop: Mark the loop as done. All workers will get out of the loop after they finish evaluating their current examples.

  • Methods to publish information associated with current trial:

    • set_metadata: Set persistent metadata by key for current trial or the sampling loop.

    • get_metadata: Get persistent metadata by key for current trial or the sampling loop.

    • add_link: Add a related link to current trial.

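A minimal sketch of how these pieces fit together in a sampling loop. The search space, algorithm choice, and train_and_evaluate helper below are illustrative placeholders, not part of this API:

import pyglove as pg

def train_and_evaluate(example, step):
  # Hypothetical evaluation; returns a float reward.
  return float(example.x)

search_space = pg.Dict(x=pg.floatv(0.0, 1.0))

for example, feedback in pg.sample(
    search_space, pg.geno.Random(seed=1), num_examples=10):
  # feedback.id and feedback.dna identify the current trial.
  reward = train_and_evaluate(example, step=1)
  feedback.add_measurement(reward, step=1)
  # Report the latest measurement to the algorithm and move on.
  feedback.done()
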
Methods:

add_link(name, url)

Adds a related link to current trial.

add_measurement([reward, metrics, step, ...])

Add a measurement for current trial.

done([metadata, related_links])

Marks current trial as done.

end_loop()

Ends current sampling loop.

get_metadata(key[, per_trial])

Gets metadata for current trial or current sampling.

get_trial()

Gets current Trial.

ignore_race_condition()

Context manager for ignoring RaceConditionError within the scope.

set_metadata(key, value[, per_trial])

Sets metadata for current trial or current sampling.

should_stop_early()

Whether progressive evaluation can be stopped early on current trial.

skip([reason])

Moves to the next example without providing feedback to the algorithm.

skip_on_exceptions(exceptions)

Returns a context manager to skip trial on user-specified exceptions.

Attributes:

checkpoint_to_warm_start_from

Gets checkpoint path to warm start from.

dna

Gets the DNA of the example used in current trial.

id

Gets the ID of current trial.

add_link(name, url)[source]

Adds a related link to current trial.

Added links can be retrieved from the Trial.related_links property via pg.poll_result.

Return type:

None

Parameters:
  • name – Name for the related link.

  • url – URL for this link.

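A small sketch of adding a link during evaluation and reading it back from an analysis process. The study name and URL below are hypothetical, and the retrieval path is assumed to be pg.poll_result as mentioned above:

# During evaluation:
feedback.add_link('tensorboard', 'http://example.com/tb/%d' % feedback.id)

# Later, from an analysis process:
result = pg.poll_result('my_study')
for trial in result.trials:
  print(trial.id, trial.related_links)
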
add_measurement(reward=None, metrics=None, step=0, checkpoint_path=None, elapse_secs=None)[source]

Add a measurement for current trial.

This method can be called multiple times on the same trial, e.g.:

for model, feedback in pg.sample(...):
  accuracy = train_and_evaluate(model, step=10)
  feedback.add_measurement(accuracy, step=10)
  accuracy = train_and_evaluate(model, step=15)
  feedback.add_measurement(accuracy, step=25)
  feedback.done()
Return type:

None

Parameters:
  • reward – An optional float value as the reward for single-objective optimization, or a sequence of float values for multi-objective optimization. In the multi-objective case, the float sequence is paired up positionally with the metrics_to_optimize argument of pg.sample, so their lengths must be equal. A multi-objective reward can also be provided through the metrics argument, a dict that maps a metric name (which should match an element of metrics_to_optimize) to its measure. When a multi-objective reward is provided through both the reward argument (as a sequence of floats) and the metrics argument, their values should agree with each other. (See the sketch after this parameter list.)

  • metrics – An optional dictionary of string to float as metrics. It can be used to provide metrics for multi-objective optimization, and/or carry additional metrics for study analysis.

  • step – An optional integer as the step (e.g. step for model training), at which the measurement applies. When a trial is completed, the measurement at the largest step will be chosen as the final measurement to feed back to the controller.

  • checkpoint_path – An optional string as the checkpoint path produced from the evaluation (e.g. training a model), which can be used in transfer learning.

  • elapse_secs – Time spent on evaluating current example so far. If None, it will be automatically computed by the backend.

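To make the multi-objective contract above concrete, here is a hedged sketch assuming pg.sample was called with metrics_to_optimize=['accuracy', 'latency'] (hypothetical metric names); the three calls show alternative ways of reporting one measurement:

# Option 1: a float sequence, paired positionally with
# metrics_to_optimize=['accuracy', 'latency'].
feedback.add_measurement(reward=(0.93, 120.0), step=10)

# Option 2: the same objectives by name, via `metrics`.
feedback.add_measurement(
    metrics={'accuracy': 0.93, 'latency': 120.0}, step=10)

# If both are given, their values should agree with each other.
feedback.add_measurement(
    reward=(0.93, 120.0),
    metrics={'accuracy': 0.93, 'latency': 120.0},
    step=10)
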
abstract property checkpoint_to_warm_start_from: str | None[source]

Gets checkpoint path to warm start from.

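A hedged sketch of how this property can pair with the checkpoint_path argument of add_measurement to warm start a trial. build_model, load_checkpoint, save_checkpoint, and train_and_evaluate are hypothetical helpers:

# Resume from a prior checkpoint when the backend provides one.
ckpt = feedback.checkpoint_to_warm_start_from
if ckpt is not None:
  model = load_checkpoint(ckpt)
else:
  model = build_model(feedback.dna)
reward = train_and_evaluate(model, step=10)
feedback.add_measurement(
    reward, step=10, checkpoint_path=save_checkpoint(model))
feedback.done()
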
abstract property dna: DNA[source]

Gets the DNA of the example used in current trial.

abstract done(metadata=None, related_links=None)[source]

Marks current trial as done.

Return type:

None

Parameters:
  • metadata – Additional metadata to add to current trial.

  • related_links – Additional links to add to current trial.

abstract end_loop()[source]

Ends current sampling loop.

Return type:

None

abstract get_metadata(key, per_trial=True)[source]

Gets metadata for current trial or current sampling.

Return type:

Optional[Any]

Parameters:
  • key – A string as key to metadata.

  • per_trial – If True, the key is retrieved per current trial. Otherwise, it is retrieved per current sampling.

Returns:

A value that can be deserialized by pg.from_json_str.

abstract get_trial()[source]

Gets current Trial.

Return type:

pg.tuning.Trial

Returns:

An up-to-date Trial object. A distributed tuning backend should make sure the return value is up-to-date not only locally, but among different workers.

abstract property id: int[source]

Gets the ID of current trial.

ignore_race_condition()

Context manager for ignoring RaceConditionError within the scope.

Race condition may happen when multiple workers are working on the same trial (e.g. paired train/eval processes). Assuming there are two co-workers (X and Y), common race conditions are:

  1. Both X and Y call feedback.done or feedback.skip on the same trial.

  2. X calls feedback.done or feedback.skip, then Y calls feedback.add_measurement.

Users can use this context manager to simplify the code for handling multiple co-workers. (See the group argument of pg.sample)

Usages:

import threading

feedback = ...

def thread_fun():
  with feedback.ignore_race_condition():
    feedback.add_measurement(0.1)

    # Multiple workers working on the same trial might trigger this code
    # from different processes.
    feedback.done()

x = threading.Thread(target=thread_fun)
x.start()

y = threading.Thread(target=thread_fun)
y.start()

Yields:

None.

abstract set_metadata(key, value, per_trial=True)[source]

Sets metadata for current trial or current sampling.

Metadata can be used in two use cases:

  • Worker processes that co-work on the same trial can use metadata to communicate with each other.

  • A worker can use metadata as a persistent store to save information for current trial, which can be retrieved later via the poll_result method.

A short sketch follows the parameter list below.

Return type:

None

Parameters:
  • key – A string as key to metadata.

  • value – A value that can be serialized by pg.to_json_str.

  • per_trial – If True, the key is set per current trial. Otherwise, it is set per current sampling loop.

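A hedged sketch of both use cases, assuming a paired train/eval setup in which two workers share the same trial (the key names and values below are arbitrary):

# Worker A (training) publishes where its latest checkpoint lives.
feedback.set_metadata('latest_ckpt', '/tmp/ckpt-10', per_trial=True)

# Worker B (evaluation), working on the same trial, reads it back.
ckpt = feedback.get_metadata('latest_ckpt', per_trial=True)

# Sampling-level metadata is shared across all trials in the loop.
feedback.set_metadata('started_by', 'worker-0', per_trial=False)
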
abstract should_stop_early()[source]

Whether progressive evaluation can be stopped early on current trial.

Return type:

bool

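One possible pattern for progressive evaluation with early stopping, assuming an early stopping policy is in use; train_and_evaluate is a hypothetical helper and model comes from the pg.sample loop:

for step in range(10, 110, 10):
  reward = train_and_evaluate(model, step=step)
  feedback.add_measurement(reward, step=step)
  if feedback.should_stop_early():
    break
feedback.done()
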
abstract skip(reason=None)[source]

Moves to the next example without providing feedback to the algorithm.

Return type:

None

skip_on_exceptions(exceptions)[source]

Returns a context manager to skip trial on user-specified exceptions.

Usages:

with feedback.skip_on_exceptions((ValueError, KeyError)):
  ...

with feedback.skip_on_exceptions(((ValueError, 'bad value for .*'),
                                  (ValueError, '.* invalid range'),
                                  TypeError)):
  ...
Parameters:

exceptions – A sequence whose elements are either an exception type, or a tuple of (exception type, regular expression), where the regular expression is matched against the error message.

Returns:

A context manager for skipping trials on user-specified exceptions.