Replies: 1 comment 1 reply
Yes, that would be the right approach here. Unfortunately, this behavior is currently broken due to some in-the-weeds details of how we handle the transforms. #861 gives some details on the issue, including a workaround: #861 (comment). But the truth is that so far we haven't been able to prioritize fixing this for good.
Hmm, interesting. I don't know exactly what's going on, but I can see how some of the recovery logic in the multi-start fitting routine could end up backpropagating through the same graph twice.
Could you provide the full stack trace / repro for this as well? It should definitely be possible to train the model manually; it looks like some inputs are just not getting passed through (this may require passing some of the inputs explicitly in your training setup, but I'd have to see a repro to tell exactly what is going on).
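For reference, with a heteroskedastic likelihood the noise model needs the inputs at evaluation time, so in a manual loop the training inputs typically have to be passed through to the mll as well. Something like this untested sketch (assuming `model` and `mll` are already constructed as in your snippet) may be what's missing:

```python
import torch

# Untested sketch: the extra *params passed to the mll are forwarded to the
# likelihood, which the heteroskedastic noise model uses to evaluate the
# noise level at the training inputs.
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    output = model(*model.train_inputs)
    # Passing the training inputs here is the key difference vs. the
    # homoskedastic case.
    loss = -mll(output, model.train_targets, *model.train_inputs)
    loss.backward()
    optimizer.step()
```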
Hi,
I have some short questions regarding the heteroskedastic noise model. If anyone has any insight, it would be a great help.
The first is about standardization. As I understand it, the Y and Yvar inputs must maintain their original relationship, and Y should be standardized, which means that Yvar cannot be standardized. This is something BoTorch complains about through an InputWarning, and I suppose it should also make fitting more unstable? Would it not make sense to also include a standardization outcome transform (in addition to the log transform) in the SingleTaskGP noise model that is employed within the HeteroskedasticSingleTaskGP?
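(To make the relationship concrete, here is what I mean, with placeholder tensor names: if Y is standardized linearly, the observed variances would have to be scaled by the square of the same factor to stay consistent.)

```python
import torch

# Sketch with placeholder names: standardize Y, and scale the observed
# noise variances by the square of the same factor so the original
# Y/Yvar relationship is preserved.
mu = train_Y.mean(dim=0)
sigma = train_Y.std(dim=0)
train_Y_std = (train_Y - mu) / sigma
train_Yvar_std = train_Yvar / sigma**2  # variances scale quadratically
```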
The second question is regarding fitting the HeteroskedasticSingleTaskGP using `fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))`. It works as long as it does not run into numerical issues, but when the multi-trial fit routine kicks in, it crashes with the following error:

```
Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
I can provide the stack trace if that would help. Lastly, I wanted to implement a simple "manual" optimization routine that I could call if the SciPy optimization crashed, as follows:
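(Sketched here with placeholder tensor names; essentially a plain Adam loop over the negative MLL.)

```python
import torch
from botorch.models import HeteroskedasticSingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

# train_X, train_Y, train_Yvar are my training tensors (placeholders)
model = HeteroskedasticSingleTaskGP(train_X, train_Y, train_Yvar)
mll = ExactMarginalLogLikelihood(model.likelihood, model)

model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    output = model(train_X)
    loss = -mll(output, model.train_targets)  # <- crashes here
    loss.backward()
    optimizer.step()
```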
which crashes with the following error:

```
TypeError: forward() missing 1 required positional argument: 'x'
```
Maybe because the noise model is not set to training mode somehow? Would it be possible to fit the HeteroskedasticSingleTaskGP manually this way, and what would I need to add in that case?

Thanks,
Erik