notebook

If you are working in a Jupyter notebook, the functions in this module can make it easier to evaluate models and view results.

results_vis()

src.scalecast.notebook.results_vis(f_dict: Dict[str, Forecaster], plot_type: str = 'forecast', include_train: bool | int = True, figsize=(12, 6))

Visualize the forecast results from many different Forecaster objects leveraging Jupyter widgets.

Parameters:
  • f_dict (dict[str,Forecaster]) – Dictionary of Forecaster objects. Works best if two or more models have been evaluated in each dictionary value.

  • plot_type (str) – One of {“forecast”,”test”}. Default “forecast”. The type of results to visualize.

  • include_train (bool or int) – Optional. Whether to include the complete training set in the plot or, if int, how many training-set observations to include. Passed to the include_train parameter when plot_type = ‘test’. Ignored when plot_type = ‘forecast’.

  • figsize (tuple) – Default (12,6). Size of the resulting figure.

Returns:

None

from scalecast.Forecaster import Forecaster
from scalecast import GridGenerator
from scalecast.notebook import tune_test_forecast, results_vis
import pandas_datareader as pdr # !pip install pandas-datareader
import matplotlib.pyplot as plt
import seaborn as sns

sns.set(rc={"figure.figsize": (12, 8)})

f_dict = {}
models = ('mlr','elasticnet','mlp')
GridGenerator.get_example_grids() # writes the Grids.py file to your working directory

for sym in ('UNRATE','GDP'):
  df = pdr.get_data_fred(sym, start = '2000-01-01')
  f = Forecaster(y=df[sym],current_dates=df.index)
  f.generate_future_dates(12) # forecast 12 periods to the future
  f.set_test_length(12) # test models on 12 periods
  f.set_validation_length(4) # validate on the previous 4 periods
  f.add_time_trend()
  f.add_seasonal_regressors('quarter',raw=False,dummy=True)
  tune_test_forecast(f,models) # adds a progress bar that is nice for notebooks
  f_dict[sym] = f

results_vis(f_dict) # toggle through results with jupyter widgets
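
The same dictionary can be passed again to view test-set results. The include_train value below is arbitrary and only illustrates the parameter:

results_vis(f_dict, plot_type='test', include_train=24) # show test-set results with the last 24 training observations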

results_vis_mv()

src.scalecast.notebook.results_vis_mv(f_dict: Dict[str, MVForecaster], plot_type='forecast', include_train=True, figsize=(12, 6))

Visualize the forecast results from many different MVForecaster objects leveraging Jupyter widgets.

Parameters:
  • f_dict (dict[str,MVForecaster]) – Dictionary of MVForecaster objects. Works best if two or more models have been evaluated in each dictionary value.

  • plot_type (str) – One of {“forecast”,”test”}. Default “forecast”. The type of results to visualize.

  • include_train (bool or int) – Optional. Whether to include the complete training set in the plot or, if int, how many training-set observations to include. Passed to the include_train parameter when plot_type = ‘test’. Ignored when plot_type = ‘forecast’.

  • figsize (tuple) – Default (12,6). Size of the resulting figure.

Returns:

None
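
The following is only a sketch of how a dictionary of MVForecaster objects might be built and visualized; it mirrors the results_vis() example above. The FRED series, model list, and GridGenerator.get_mv_grids() call are illustrative assumptions rather than required steps.

from scalecast.Forecaster import Forecaster
from scalecast.MVForecaster import MVForecaster
from scalecast import GridGenerator
from scalecast.notebook import tune_test_forecast, results_vis_mv
import pandas_datareader as pdr # !pip install pandas-datareader

GridGenerator.get_mv_grids() # assumed to write the MVGrids.py file to your working directory

# build one Forecaster per series, then combine them into an MVForecaster
df = pdr.get_data_fred(['UNRATE','UMCSENT'], start = '2000-01-01')
f1 = Forecaster(y=df['UNRATE'],current_dates=df.index)
f2 = Forecaster(y=df['UMCSENT'],current_dates=df.index)
for f in (f1, f2):
  f.generate_future_dates(12) # forecast 12 periods to the future

mvf = MVForecaster(f1,f2,names=['UNRATE','UMCSENT'])
mvf.set_test_length(12) # test models on 12 periods
mvf.set_validation_length(4) # validate on the previous 4 periods

tune_test_forecast(mvf,('mlr','elasticnet')) # progress bar through tqdm

results_vis_mv({'fred_series': mvf}) # toggle through results with jupyter widgets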

tune_test_forecast()

src.scalecast.notebook.tune_test_forecast(f, models, cross_validate=False, dynamic_tuning=False, dynamic_testing=True, summary_stats=False, feature_importance=False, fi_try_order=None, limit_grid_size=None, min_grid_size=1, suffix=None, error='raise', **cvkwargs)

Tunes, tests, and forecasts a series of models with a progress bar through tqdm.

Parameters:
  • f (Forecaster or MVForecaster) – The object to run the models through.

  • models (list-like) – Each element must be in Forecaster.can_be_tuned.

  • cross_validate (bool) – Default False. Whether to tune the model with cross validation. If False, uses the validation slice of data to tune.

  • dynamic_tuning (bool or int) – Default False. Whether to dynamically tune the model or, if int, how many forecast steps to dynamically tune it.

  • dynamic_testing (bool or int) – Default True. Whether to dynamically/recursively test the forecast (meaning AR terms will be propagated with predicted values). If True, evaluates recursively over the entire out-of-sample slice of data. If int, evaluates recursively in windows of that many steps (2 for 2-step recursive testing, 12 for 12-step, etc.). Setting this to False or 1 means faster performance but gives a less reliable indication of how well the forecast will perform more than one period out.

  • summary_stats (bool) – Default False. Whether to save summary stats for the models that offer those. Does not work for MVForecaster objects.

  • feature_importance (bool) – Default False. Whether to save feature importance information for the models that offer it. Does not work for MVForecaster objects.

  • fi_try_order (list) – Optional. If the feature_importance argument is True, which feature importance methods to try. If using a combination of tree-based and linear models, for example, it might be good to pass [‘TreeExplainer’,’LinearExplainer’]. The default uses whatever is specified by default in Forecaster.save_feature_importance(), which usually ends up being the PermutationExplainer.

  • limit_grid_size (int or float) – Optional. Pass an argument here to limit each of the grids being read. See https://scalecast.readthedocs.io/en/latest/Forecaster/Forecaster.html#src.scalecast.Forecaster.Forecaster.limit_grid_size.

  • min_grid_size (int) – Default 1. The smallest grid size to keep. Ignored if limit_grid_size is None.

  • suffix (str) – Optional. A suffix to add to each model as it is evaluated to differentiate them when called later. If unspecified, each model can be called by its estimator name.

  • error (str) – One of ‘ignore’,’raise’,’warn’; default ‘raise’. What to do with the error if a given model fails. ‘warn’ prints a warning that the model could not be evaluated.

  • **cvkwargs – Passed to the cross_validate() method.

Returns:

None

from scalecast.notebook import tune_test_forecast

# f is a Forecaster or MVForecaster object that has already been set up (test length, regressors, etc.)
models = ('arima','mlr','mlp')
tune_test_forecast(f,models) # displays a progress bar through tqdm
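
The optional arguments can be combined to control how tuning runs. The call below is a hedged sketch rather than a prescribed recipe: the parameter values are illustrative, and k is assumed to be a valid keyword of the underlying cross_validate() method.

tune_test_forecast(
  f, # an already-prepared Forecaster object
  models=('mlr','elasticnet','mlp'),
  cross_validate=True, # tune with cross validation instead of the validation slice
  limit_grid_size=.2, # randomly keep 20% of each grid to speed up tuning
  min_grid_size=4, # but never fewer than 4 grid combinations
  suffix='_cv', # results callable later as 'mlr_cv', 'elasticnet_cv', 'mlp_cv'
  error='warn', # warn instead of raising if a model fails
  k=3, # passed through **cvkwargs to cross_validate()
)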