ApproximateGP vs ExactGP Performance Discrepancy in Bayesian Optimization Contexts #2632
-
How many inducing points are you using and how did you select them?
As to the training of the approximate model: I don't see anything wrong at first glance, but one would have to take a closer look at the actual results (e.g. the learning curve) to understand what one might want to tweak.
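For concreteness, here is a minimal sketch of one common way to select inducing points (k-means centroids over the training inputs); the helper name is illustrative, not part of the original post:

```python
import torch
from sklearn.cluster import KMeans

def select_inducing_points(train_x: torch.Tensor, m: int = 128) -> torch.Tensor:
    # A common default: use k-means centroids of the training inputs
    # as the inducing point locations rather than the raw data itself.
    km = KMeans(n_clusters=m, n_init=10).fit(train_x.cpu().numpy())
    return torch.as_tensor(km.cluster_centers_, dtype=train_x.dtype)
```

Too few inducing points will cap predictive accuracy no matter how well the model is trained, which is one plausible source of the gap.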
-
A couple of add-ons here, since I use variational GPs almost exclusively in my BO problems:
-
I've encountered unexpected performance differences between ApproximateGP and ExactGP models in Bayesian optimization contexts. Despite using identical priors and initial training points, the ApproximateGP model consistently underperforms ExactGP when the inducing point locations are fixed at the training data.
I'm wondering whether I've made an error in my model construction or training loop that would explain this discrepancy, or whether there is reason to expect different performance from these models. If there are any best practices for training approximate models, I would greatly appreciate hearing them. If other bits of code would help identify the problem, please let me know. Below is my implementation for review:
Model classes:
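A minimal sketch of the two model classes, assuming a constant mean, a scaled RBF kernel, and inducing points fixed at the training inputs (class names here are illustrative, not necessarily the exact code from the post):

```python
import gpytorch
import torch
from gpytorch.models import ApproximateGP, ExactGP
from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy


class SVGPModel(ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0))
        # learn_inducing_locations=False keeps the inducing points fixed at the data
        variational_strategy = VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=False
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


class ExactGPModel(ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```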
Training loop for the approximate model (the ExactGP is trained with fit_gpytorch_mll(mll)):
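A minimal sketch of that loop, assuming the illustrative SVGPModel above and tensors train_x / train_y; it also records the loss at each step so the learning curve can be inspected, as suggested in the first reply:

```python
import torch
import gpytorch
from gpytorch.mlls import VariationalELBO

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SVGPModel(inducing_points=train_x.clone())  # inducing points fixed at the data

model.train()
likelihood.train()
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)
# num_data is required so the ELBO is scaled correctly for the full dataset
mll = VariationalELBO(likelihood, model, num_data=train_y.size(0))

losses = []
for _ in range(500):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())  # keep the learning curve for inspection
```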