dsipts.models.CrossFormer module

class dsipts.models.CrossFormer.CrossFormer(past_steps, future_steps, past_channels, future_channels, d_model, embs, hidden_size, n_head, seg_len, n_layer_encoder, win_size, out_channels, factor=5, remove_last=False, persistence_weight=0.0, loss_type='l1', quantiles=[], dropout_rate=0.1, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

Bases: Base

CrossFormer (https://openreview.net/forum?id=vSVLM2j9eie)

Parameters:
  • past_steps (int) – number of past datapoints used; not used by this model

  • future_steps (int) – number of future lags to predict

  • past_channels (int) – number of numeric past variables, must be >0

  • future_channels (int) – number of future numeric variables

  • d_model (int) – dimension of the attention model

  • embs (List) – list of the initial dimensions of the categorical variables

  • hidden_size (int) – hidden size of the linear block

  • n_head (int) – number of heads

  • seg_len (int) – segment length (L_seg) see the paper for more details

  • n_layer_encoder (int) – layers to use in the encoder

  • win_size (int) – window size for segment merging

  • factor (int) – number of routers in the Cross-Dimension Stage of TSA (c in the paper); see the paper for more details

  • remove_last (boolean, optional) – if True the model tries to predict the difference with respect to the last observation.

  • out_channels (int) – number of output channels

  • persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0

  • loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization or exponential_penalization. Default l1.

  • quantiles (List[int], optional) – NOT USED YET

  • dropout_rate (float, optional) – dropout rate in Dropout layers. Defaults to 0.1.

  • optim (str, optional) – if not None, a PyTorch optim method is expected. Defaults to None, which is mapped to Adam.

  • optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.

  • scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.

handle_multivariate = True
handle_future_covariates = False
handle_categorical_variables = False
handle_quantile_loss = False
description = 'Can   handle multivariate output \nCan NOT  handle future covariates\nCan NOT  handle categorical covariates\nCan NOT  handle Quantile loss function'
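A minimal instantiation sketch; the import path follows the module name above, and the argument values are illustrative placeholders, not recommended settings:

    from dsipts.models.CrossFormer import CrossFormer

    model = CrossFormer(
        past_steps=64,        # length of the past window (not used by this model)
        future_steps=16,      # horizon to predict
        past_channels=3,      # numeric past variables, must be > 0
        future_channels=0,    # future covariates are not handled by this model
        d_model=128,          # dimension of the attention model
        embs=[],              # categorical covariates are not handled by this model
        hidden_size=256,      # hidden size of the linear block
        n_head=4,             # number of attention heads
        seg_len=8,            # segment length (L_seg)
        n_layer_encoder=3,    # encoder layers
        win_size=2,           # window size for segment merging
        out_channels=3,       # number of output channels
    )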
__init__(past_steps, future_steps, past_channels, future_channels, d_model, embs, hidden_size, n_head, seg_len, n_layer_encoder, win_size, out_channels, factor=5, remove_last=False, persistence_weight=0.0, loss_type='l1', quantiles=[], dropout_rate=0.1, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

CrossFormer (https://openreview.net/forum?id=vSVLM2j9eie)

Parameters:
  • past_steps (int) – number of past datapoints used; not used by this model

  • future_steps (int) – number of future lags to predict

  • past_channels (int) – number of numeric past variables, must be >0

  • future_channels (int) – number of future numeric variables

  • d_model (int) – dimension of the attention model

  • embs (List) – list of the initial dimensions of the categorical variables

  • hidden_size (int) – hidden size of the linear block

  • n_head (int) – number of heads

  • seg_len (int) – segment length (L_seg) see the paper for more details

  • n_layer_encoder (int) – layers to use in the encoder

  • win_size (int) – window size for segment merging

  • factor (int) – number of routers in the Cross-Dimension Stage of TSA (c in the paper); see the paper for more details

  • remove_last (boolean, optional) – if True the model tries to predict the difference with respect to the last observation.

  • out_channels (int) – number of output channels

  • persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0

  • loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization or exponential_penalization. Default l1.

  • quantiles (List[int], optional) – NOT USED YET

  • dropout_rate (float, optional) – dropout rate in Dropout layers. Defaults to 0.1.

  • optim (str, optional) – if not None, a PyTorch optim method is expected. Defaults to None, which is mapped to Adam.

  • optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.

  • scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.

forward(batch)[source]

Forward method used during the training loop

Parameters:

batch (dict) – the batch structure. The keys are:
  • y – the target variable(s). This is always present
  • x_num_past – the numerical past variables. This is always present
  • x_num_future – the numerical future variables
  • x_cat_past – the categorical past variables
  • x_cat_future – the categorical future variables
  • idx_target – index of the target features in the past array

Returns:

output of the model

Return type:

torch.tensor
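A sketch of the expected batch and call; the tensor shapes below are assumptions based on the constructor arguments in the instantiation sketch above, and the exact layout is defined by the dsipts data pipeline:

    import torch

    batch = {
        "y": torch.randn(32, 16, 3),            # targets: assumed (batch, future_steps, out_channels)
        "x_num_past": torch.randn(32, 64, 3),   # past numerics: assumed (batch, past_steps, past_channels)
        "idx_target": torch.tensor([0, 1, 2]),  # indices of the target features inside x_num_past
    }
    prediction = model(batch)                   # torch.Tensor with the forecasts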