Extrapolating hyperparameters across dataset size with Bayesian optimisation

In this talk, Álvaro introduces us to FABOLAS [Kle17F], a Bayesian optimization procedure for hyperparameter tuning “which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost.” This is done with “a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset.”
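To make the core idea concrete, here is a minimal sketch, not FABOLAS itself: it evaluates a single hyperparameter configuration on growing training subsets and extrapolates the validation error to the full dataset with a power-law fit, illustrating only the “small subsets plus extrapolation” ingredient. The dataset, model, and the power-law parametric form below are illustrative assumptions; FABOLAS instead places a Gaussian process over both the configuration and the dataset fraction and selects queries by expected information gain per unit cost.

```python
# A simplified illustration (not the authors' code): estimate the
# full-dataset validation error of one configuration from cheap
# evaluations on small training subsets.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large dataset (assumption for the sketch).
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def val_error(C: float, fraction: float) -> float:
    """Validation error of logistic regression (inverse regularization C)
    trained on a random subset holding `fraction` of the training data."""
    n = max(50, int(fraction * len(X_tr)))
    idx = np.random.default_rng(0).choice(len(X_tr), size=n, replace=False)
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr[idx], y_tr[idx])
    return 1.0 - model.score(X_val, y_val)

# Cheap evaluations of one configuration on growing subset fractions.
fractions = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
errors = np.array([val_error(C=1.0, fraction=f) for f in fractions])

# Power-law learning curve e(s) = a * s^(-b) + c, a common parametric
# choice for how validation error shrinks with training set size.
def power_law(s, a, b, c):
    return a * np.power(s, -b) + c

(a, b, c), _ = curve_fit(power_law, fractions, errors,
                         p0=(0.5, 0.5, 0.05), maxfev=10_000)

# Extrapolate to the full dataset (fraction s = 1.0).
print(f"extrapolated full-dataset error: {power_law(1.0, a, b, c):.4f}")
```

In FABOLAS the dependence on dataset size is not a separate parametric fit per configuration: it is encoded in the surrogate model itself, so that observations at small fractions inform predictions for every configuration at the full size, and the acquisition function decides where, and at what fraction, the next evaluation pays off most per second of compute.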
References
[Kle17F]
Klein, A., Falkner, S., Bartels, S., Hennig, P., Hutter, F. Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets. AISTATS 2017.