dsipts.models.Autoformer module¶
- class dsipts.models.Autoformer.Autoformer(past_steps, future_steps, label_len, past_channels, future_channels, out_channels, d_model, embs, kernel_size, activation='torch.nn.ReLU', factor=5, n_head=1, n_layer_encoder=2, n_layer_decoder=2, hidden_size=1048, persistence_weight=0.0, loss_type='l1', quantiles=[], dropout_rate=0.1, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]¶
Bases: Base
- Parameters:
past_steps (int) – number of past datapoints used (not used by this model)
future_steps (int) – number of future lags to predict
label_len (int) – length of the overlap between the encoder input and the decoder input
past_channels (int) – number of numeric past variables, must be >0
future_channels (int) – number of future numeric variables
out_channels (int) – number of output channels
d_model (int) – dimension of the attention model
embs (List) – list of the initial dimension of the categorical variables
embed_type (int) – type of embedding
kernel_size (int) – kernel size of the moving-average block used in the series decomposition
activation (str, optional) – pytorch activation function. Default torch.nn.ReLU
n_head (int, optional) – number of heads
n_layer_encoder (int, optional) – number of encoding layers
n_layer_decoder (int, optional) – number of decoding layers
factor (int) – number of routers in the Cross-Dimension Stage of TSA (see the paper)
persistence_weight (float) – weight controlling the divergence from persistence model. Default 0
loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization or exponential_penalization. Default l1.
quantiles (List[float], optional) – quantile levels for the quantile loss (usually [0.1, 0.5, 0.9]); if len(quantiles)==0 the L1 loss is used instead. Defaults to [].
dropout_rate (float, optional) – dropout rate in Dropout layers. Defaults to 0.1.
optim (str, optional) – if not None, it expects a pytorch optim method. Defaults to None, which maps to Adam.
optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.
scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.
- handle_multivariate = True¶
- handle_future_covariates = True¶
- handle_categorical_variables = True¶
- handle_quantile_loss = True¶
- description = 'Can handle multivariate output \nCan handle future covariates\nCan handle categorical covariates\nCan handle Quantile loss function'¶
- __init__(past_steps, future_steps, label_len, past_channels, future_channels, out_channels, d_model, embs, kernel_size, activation='torch.nn.ReLU', factor=5, n_head=1, n_layer_encoder=2, n_layer_decoder=2, hidden_size=1048, persistence_weight=0.0, loss_type='l1', quantiles=[], dropout_rate=0.1, optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]¶
- Parameters:
past_steps (int) – number of past datapoints used (not used by this model)
future_steps (int) – number of future lags to predict
label_len (int) – length of the overlap between the encoder input and the decoder input
past_channels (int) – number of numeric past variables, must be >0
future_channels (int) – number of future numeric variables
out_channels (int) – number of output channels
d_model (int) – dimension of the attention model
embs (List) – list of the initial dimension of the categorical variables
embed_type (int) – type of embedding
kernel_size (int) – kernel size of the moving-average block used in the series decomposition
activation (str, optional) – pytorch activation function. Default torch.nn.ReLU
n_head (int, optional) – number of heads
n_layer_encoder (int, optional) – number of encoding layers
n_layer_decoder (int, optional) – number of decoding layers
factor (int) – number of routers in the Cross-Dimension Stage of TSA (see the paper)
persistence_weight (float) – weight controlling the divergence from persistence model. Default 0
loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization or exponential_penalization. Default l1.
quantiles (List[float], optional) – quantile levels for the quantile loss (usually [0.1, 0.5, 0.9]); if len(quantiles)==0 the L1 loss is used instead. Defaults to [].
dropout_rate (float, optional) – dropout rate in Dropout layers. Defaults to 0.1.
optim (str, optional) – if not None, it expects a pytorch optim method. Defaults to None, which maps to Adam.
optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.
scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.
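A hedged constructor sketch, assuming the signature above; all values here are illustrative examples, not recommended defaults.

```python
# Illustrative constructor arguments for Autoformer; values are arbitrary
# examples chosen to match the parameter descriptions above.
config = dict(
    past_steps=64,              # length of the past input window
    future_steps=16,            # number of future lags to predict
    label_len=8,                # overlap between encoder and decoder inputs
    past_channels=3,            # must be > 0
    future_channels=2,          # numerical future covariates
    out_channels=1,             # target channels
    d_model=128,                # attention model dimension
    embs=[12, 7],               # cardinalities of the categorical variables
    kernel_size=25,             # moving-average kernel of the decomposition
    quantiles=[0.1, 0.5, 0.9],  # non-empty list enables the quantile loss
)
# model = Autoformer(**config)  # requires dsipts to be installed
```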
- forward(batch)[source]¶
Forward method used during the training loop
- Parameters:
batch (dict) – the batch structure. The keys are:
y: the target variable(s); always present
x_num_past: the numerical past variables; always present
x_num_future: the numerical future variables
x_cat_past: the categorical past variables
x_cat_future: the categorical future variables
idx_target: index of the target features in the past array
- Returns:
output of the model
- Return type:
torch.Tensor
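The batch dictionary above can be sketched as follows; the tensor shapes are assumptions inferred from the key descriptions (batch, time steps, channels), not taken from the library.

```python
import torch

# Hypothetical sizes: batch, past steps, future steps, and channel counts.
B, P, F = 32, 64, 16
batch = {
    "y": torch.randn(B, F, 1),             # target variable(s); always present
    "x_num_past": torch.randn(B, P, 3),    # numerical past; always present
    "x_num_future": torch.randn(B, F, 2),  # numerical future covariates
    "x_cat_past": torch.randint(0, 12, (B, P, 2)),    # categorical past
    "x_cat_future": torch.randint(0, 12, (B, F, 2)),  # categorical future
    "idx_target": torch.tensor([0]),       # index of target in the past array
}
# out = model(batch)  # returns a torch.Tensor with the forecasts
```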