dsipts.models.RNN module

class dsipts.models.RNN.MyBN(channels)[source]

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

__init__(channels)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
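The docstrings above are inherited from torch.nn.Module and say nothing about the class itself. Judging by the name and the channels argument, MyBN appears to be a thin batch-normalization wrapper; the following is a minimal sketch of such a wrapper (an assumption about the internals, not the library's exact code):

    import torch
    import torch.nn as nn

    class MyBN(nn.Module):
        """Hypothetical sketch: batch normalization over the channel axis
        of (batch, length, channels) tensors."""

        def __init__(self, channels: int):
            super().__init__()
            self.bn = nn.BatchNorm1d(channels)  # assumption: 1D BN over `channels`

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # nn.BatchNorm1d expects (batch, channels, length),
            # so permute, normalize, and permute back.
            return self.bn(x.permute(0, 2, 1)).permute(0, 2, 1)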

class dsipts.models.RNN.RNN(past_steps, future_steps, past_channels, future_channels, embs, cat_emb_dim, hidden_RNN, num_layers_RNN, kind, kernel_size, sum_emb, out_channels, activation='torch.nn.ReLU', remove_last=False, dropout_rate=0.1, use_bn=False, num_blocks=4, bidirectional=True, lstm_type='slstm', persistence_weight=0.0, loss_type='l1', quantiles=[], n_classes=0, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

Bases: Base

Recurrent model with an encoder-decoder structure

Parameters:
  • past_steps (int) – number of past datapoints used

  • future_steps (int) – number of future steps (lags) to predict

  • past_channels (int) – number of numeric past variables, must be >0

  • future_channels (int) – number of future numeric variables

  • embs (List) – list of the initial dimensions (cardinalities) of the categorical variables

  • cat_emb_dim (int) – final dimension of each categorical variable

  • hidden_RNN (int) – hidden size of the RNN block

  • num_layers_RNN (int) – number of RNN layers

  • kind (str) – one among GRU, LSTM, or xLSTM

  • kernel_size (int) – kernel size in the encoder convolutional block

  • sum_emb (bool) – if True the contributions of the embeddings are summed, otherwise they are stacked

  • out_channels (int) – number of output channels

  • activation (str, optional) – PyTorch activation function. Default torch.nn.ReLU

  • remove_last (bool, optional) – if True the model learns the difference with respect to the last observed point

  • dropout_rate (float, optional) – dropout rate in Dropout layers

  • use_bn (bool, optional) – if True batch-normalization layers are added and dropout layers are removed

  • num_blocks (int, optional) – number of xLSTM blocks (only for xLSTM), default 4

  • bidirectional (bool, optional) – if True the RNN layers are bidirectional, default True

  • lstm_type (str, optional) – type of xLSTM cell, slstm or mlstm (only for xLSTM). Default slstm

  • persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0

  • loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization and exponential_penalization. Default l1.

  • quantiles (List[float], optional) – quantile loss is used if len(quantiles) > 0 (usually [0.1, 0.5, 0.9]); L1 loss is used if len(quantiles) == 0. Defaults to [].

  • n_classes (int) – number of classes (0 for regression)

  • optim (str, optional) – if not None, the name of a PyTorch optimizer. Defaults to None, which is mapped to Adam.

  • optim_config (dict, optional) – configuration for the optimizer (Adam by default). Defaults to None.

  • scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.

handle_multivariate = True
handle_future_covariates = True
handle_categorical_variables = True
handle_quantile_loss = True
__init__(past_steps, future_steps, past_channels, future_channels, embs, cat_emb_dim, hidden_RNN, num_layers_RNN, kind, kernel_size, sum_emb, out_channels, activation='torch.nn.ReLU', remove_last=False, dropout_rate=0.1, use_bn=False, num_blocks=4, bidirectional=True, lstm_type='slstm', persistence_weight=0.0, loss_type='l1', quantiles=[], n_classes=0, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

See the class documentation above for the full parameter descriptions.
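A construction sketch for reference. All values below are illustrative, and the lowercase strings for kind are an assumption (the documentation names the kinds but not their casing):

    from dsipts.models.RNN import RNN

    model = RNN(
        past_steps=64,            # look-back window
        future_steps=16,          # forecast horizon
        past_channels=3,          # numeric variables observed in the past
        future_channels=1,        # numeric variables known in the future
        embs=[12, 7],             # cardinalities of the categorical variables
        cat_emb_dim=8,            # embedding size per categorical variable
        hidden_RNN=128,
        num_layers_RNN=2,
        kind='lstm',              # 'gru' and 'xlstm' per the docs; casing is an assumption
        kernel_size=3,
        sum_emb=True,             # sum embedding contributions instead of stacking
        out_channels=1,
        quantiles=[0.1, 0.5, 0.9],  # non-empty list enables quantile loss
    )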

forward(batch)[source]

Forward pass of the model. Implementing this method is mandatory for every model.

Parameters:

batch (dict) – batch produced by the dataloader

Returns:

the model prediction

Return type:

torch.Tensor
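A sketch of a forward call. The exact batch keys are defined by the dsipts dataloader; the key names and the output shape below are assumptions used for illustration, with shapes following the construction example above:

    import torch

    B = 8  # batch size
    batch = {
        'x_num_past': torch.randn(B, 64, 3),            # (B, past_steps, past_channels)
        'x_cat_past': torch.randint(0, 7, (B, 64, 2)),  # two categorical variables
        'x_num_future': torch.randn(B, 16, 1),          # (B, future_steps, future_channels)
        'x_cat_future': torch.randint(0, 7, (B, 16, 2)),
    }

    model.eval()
    with torch.no_grad():
        y_hat = model(batch)  # assumed shape: (B, future_steps, out_channels, len(quantiles))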