Hello,
I am trying to benchmark fasster against a few other custom seq2seq models. The alternative models I'm using are computationally expensive, so I'm using an 80-20 split to train and evaluate them. For the final 20% of the data, I generate forecasts over a 6-month horizon, shift the input window one step to the right, and repeat without updating the model parameters. Currently, to benchmark against fasster, I have something like the following (assuming .init in stretch_tsibble specifies the training set).
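Roughly, the setup looks like this (a sketch with a placeholder value column and an illustrative FASSTER formula; my actual specification differs):

```r
library(fasster)
library(fable)
library(tsibble)
library(dplyr)

# 80-20 split: the first window is the training set
n_train <- floor(0.8 * nrow(my_data))

cv_fc <- my_data %>%
  # rolling-origin windows, each one observation longer than the last
  stretch_tsibble(.init = n_train, .step = 1) %>%
  # NB: this fits a separate fasster model for every .id (every window)
  model(fasster = FASSTER(value ~ poly(1) + trig(12))) %>%
  # 6-month forecast horizon from the end of each window
  forecast(h = "6 months")
```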
This code estimates a separate fasster model for each .id. Refitting across windows makes the seq2seq and fasster models difficult to compare. Furthermore, the new_data argument in forecast() only seems to be useful for exogenous regressors, not for autoregressive/MA lags. Is there a way to use a model with fixed parameters so that the validation approaches are consistent? If not, are there any plans to fully decouple training from forecasting?
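For concreteness, the kind of workflow I'm after looks roughly like this (hypothetical, continuing from the sketch above; fabletools exposes a refit() generic with a reestimate argument for some models, but I don't know whether fasster implements it):

```r
# fit once on the training window only
fit <- my_data %>%
  slice(1:n_train) %>%
  model(fasster = FASSTER(value ~ poly(1) + trig(12)))

# hypothetical: extend the conditioning data by one observation while keeping
# the parameters fixed, then forecast 6 months ahead from the new origin
fc_next <- fit %>%
  refit(my_data %>% slice(1:(n_train + 1)), reestimate = FALSE) %>%
  forecast(h = "6 months")
```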