Hypervolume Knowledge Gradient (HVKG)
New features
Hypervolume Knowledge Gradient (HVKG):
- Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning on the unknown observation(s) that would be received if `X` were evaluated (#1950, #1982, #2101). A usage sketch follows this list.
- Add a tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
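A minimal sketch of wiring up the new acquisition function. The import path and the constructor arguments shown (`model`, `ref_point`, `num_fantasies`, `num_pareto`) are assumptions based on this release, and the toy data is purely illustrative:

```python
import torch
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qHypervolumeKnowledgeGradient,
)
from botorch.models import ModelListGP, SingleTaskGP
from botorch.optim import optimize_acqf

# Toy two-objective problem: independent GPs fit to 8 random points in [0, 1]^2.
train_X = torch.rand(8, 2, dtype=torch.double)
Y1 = (train_X**2).sum(dim=-1, keepdim=True)
Y2 = (1 - train_X).prod(dim=-1, keepdim=True)
model = ModelListGP(SingleTaskGP(train_X, Y1), SingleTaskGP(train_X, Y2))

# Hypervolume is computed with respect to this reference point; the fantasy
# and Pareto-set sizes are kept small only to make the sketch cheap to run.
hvkg = qHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.zeros(2, dtype=torch.double),
    num_fantasies=2,
    num_pareto=4,
)

# qHVKG is a one-shot acquisition function; optimize_acqf handles the extra
# fantasy dimensions internally.
candidate, value = optimize_acqf(
    acq_function=hvkg,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=2,
    raw_samples=16,
)
```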
Other new features:
- Add `MultiOutputFixedCostModel`, which is useful for decoupled scenarios where the objectives have different costs (#2093).
- Enable `q > 1` in acquisition function optimization when nonlinear constraints are present (#1793); see the sketch after this list.
- Support different noise levels for different outputs in test functions (#2136).
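A hedged sketch of what `q > 1` with a nonlinear constraint could look like in `optimize_acqf`. It assumes the `(callable, is_intrapoint)` tuple form for `nonlinear_inequality_constraints` and the requirement to supply feasible `batch_initial_conditions`; both details may differ across versions, and the model and constraint are toy examples:

```python
import torch
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(6, 2, dtype=torch.double)
train_Y = (train_X**2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = qExpectedImprovement(model=model, best_f=train_Y.max())

# Constraint x0 + x1 <= 1.5, written as c(x) >= 0 and applied to each of the
# q candidate points individually (is_intrapoint=True).
constraint = (lambda x: 1.5 - x.sum(), True)

# Initial conditions (num_restarts x q x d) are scaled to satisfy the
# constraint, since nonlinear constraints require feasible starting points.
init_X = 0.5 * torch.rand(2, 2, 2, dtype=torch.double)

candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,  # q > 1 is the capability this release adds
    num_restarts=2,
    nonlinear_inequality_constraints=[constraint],
    batch_initial_conditions=init_X,
)
```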
Bug fixes
- Fix fantasization with a `FixedNoiseGaussianLikelihood` when `noise` is known and `X` is empty (#2090).
- Make `LearnedObjective` compatible with constraints in acquisition functions regardless of `sample_shape` (#2111); a sketch of pairing `LearnedObjective` with an MC acquisition function follows this list.
- Make input constructors for `qExpectedImprovement`, `qLogExpectedImprovement`, and `qProbabilityOfImprovement` compatible with `LearnedObjective` regardless of `sample_shape` (#2115).
- Fix handling of constraints in `qSimpleRegret` (#2141).
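For context on the `LearnedObjective` fixes above, a minimal sketch of how a preference model is typically turned into an objective for an MC acquisition function. The data, comparisons, and `best_f` value are placeholders:

```python
import torch
from botorch.acquisition.logei import qLogExpectedImprovement
from botorch.acquisition.objective import LearnedObjective
from botorch.models import PairwiseGP, SingleTaskGP

# Outcome model: maps 3-dim designs X to 2-dim outcomes Y.
train_X = torch.rand(8, 3, dtype=torch.double)
train_Y = torch.stack([train_X.sum(-1), train_X.prod(-1)], dim=-1)
outcome_model = SingleTaskGP(train_X, train_Y)

# Preference model over outcomes: each row [i, j] says outcome i was
# preferred to outcome j.
comparisons = torch.tensor([[0, 1], [2, 3], [4, 5]])
pref_model = PairwiseGP(train_Y, comparisons)

# LearnedObjective wraps the preference model as an MC objective, so the
# acquisition function is evaluated in (sampled) utility space.
objective = LearnedObjective(pref_model=pref_model)
acqf = qLogExpectedImprovement(
    model=outcome_model,
    best_f=0.0,  # placeholder; in practice, the utility of the incumbent
    objective=objective,
)
```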
Other changes
- Increase the default sample size for `LearnedObjective` (#2095).
- Allow passing in `X` with or without fidelity dimensions in `project_to_target_fidelity` (#2102); see the sketch after this list.
- Use a full-rank task covariance matrix by default in the SAAS MTGP (#2104).
- Rename `FullyBayesianPosterior` to `GaussianMixturePosterior`; add `_is_ensemble` and `_is_fully_bayesian` attributes to `Model` (#2108).
- Various improvements to tutorials, including speedups, improved explanations, and compatibility with newer versions of libraries.