dsipts.models.LinearTS module¶
- class dsipts.models.LinearTS.moving_avg(kernel_size, stride)[source]¶
Bases:
Module
Moving average block to highlight the trend of time series
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- __init__(kernel_size, stride)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
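For orientation, a minimal sketch of a moving-average trend block in the style of the LTSF-Linear repository this model references. The class name, the (batch, time, channels) layout, and the replicate-padding at the sequence ends are assumptions carried over from that repository, not a verbatim copy of this module:

```python
import torch
import torch.nn as nn

class MovingAvgSketch(nn.Module):
    """Sketch of a moving-average trend extractor (assumed LTSF-Linear style)."""

    def __init__(self, kernel_size: int, stride: int):
        super().__init__()
        self.kernel_size = kernel_size
        # AvgPool1d smooths along the time axis; channels stay independent
        self.avg = nn.AvgPool1d(kernel_size=kernel_size, stride=stride, padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels). Pad by repeating the first and last
        # time steps so the smoothed output keeps the input length
        # (assumption taken from the LTSF-Linear implementation).
        front = x[:, 0:1, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        end = x[:, -1:, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        x = torch.cat([front, x, end], dim=1)
        # AvgPool1d expects (batch, channels, time), so permute around it
        x = self.avg(x.permute(0, 2, 1))
        return x.permute(0, 2, 1)
```

With stride=1 and an odd kernel_size the output keeps the input's sequence length, which is what the decomposition block below relies on.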
- class dsipts.models.LinearTS.series_decomp(kernel_size)[source]¶
Bases:
Module
Series decomposition block
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- __init__(kernel_size)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
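The decomposition itself is a one-liner on top of the moving average: the trend is the smoothed series and the residual is what remains. A sketch, reusing MovingAvgSketch from the previous block (same assumptions apply):

```python
import torch
import torch.nn as nn

class SeriesDecompSketch(nn.Module):
    """Sketch of a series decomposition block: x = residual + trend."""

    def __init__(self, kernel_size: int):
        super().__init__()
        # stride=1 keeps one trend value per input time step
        self.moving_avg = MovingAvgSketch(kernel_size, stride=1)

    def forward(self, x: torch.Tensor):
        trend = self.moving_avg(x)   # smooth, slowly-varying component
        residual = x - trend         # seasonal/noise remainder
        return residual, trend
```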
- class dsipts.models.LinearTS.LinearTS(past_steps, future_steps, past_channels, future_channels, embs, cat_emb_dim, kernel_size, sum_emb, out_channels, hidden_size, dropout_rate=0.1, activation='torch.nn.ReLU', kind='linear', use_bn=False, persistence_weight=0.0, loss_type='l1', quantiles=[], n_classes=0, optim=None, optim_config=None, scheduler_config=None, simple=False, **kwargs)[source]¶
Bases:
Base
Linear model from https://github.com/cure-lab/LTSF-Linear/blob/main/run_longExp.py
- Parameters:
past_steps (int) – number of past datapoints used
future_steps (int) – number of future lags to predict
past_channels (int) – number of numeric past variables, must be >0
future_channels (int) – number of future numeric variables
embs (List[int]) – list of the initial dimension of the categorical variables
cat_emb_dim (int) – final dimension of each categorical variable
kernel_size (int) – kernel dimension for initial moving average
sum_emb (bool) – if true, the contribution of each embedding will be summed up; otherwise stacked
out_channels (int) – number of output channels
hidden_size (int) – hidden size of the linear block
dropout_rate (float, optional) – dropout rate in Dropout layers. Default 0.1
activation (str, optional) – PyTorch activation function. Default torch.nn.ReLU
kind (str, optional) – one among linear, dlinear (de-trending), nlinear (differential). Defaults to ‘linear’.
use_bn (bool, optional) – if true BN layers will be added and dropouts will be removed. Default False
quantiles (List[int], optional) – quantile loss is used if len(quantiles) > 0 (usually 0.1, 0.5, 0.9); L1 loss is used in case len(quantiles)==0. Defaults to [].
persistence_weight (float) – weight controlling the divergence from persistence model. Default 0
loss_type (str, optional) – this model uses custom losses, or l1 or mse. Custom losses can be linear_penalization or exponential_penalization. Default l1.
n_classes (int) – number of classes (0 in regression)
optim (str, optional) – if not None, it expects a pytorch optim method. Defaults to None, which is mapped to Adam.
optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.
scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.
simple (bool, optional) – if true, the model is the same as the one illustrated in the paper; otherwise a more elaborate model built on the same idea (see the instantiation sketch after this list)
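For orientation, a hypothetical instantiation; every numeric value below is an illustrative assumption, not a default or recommendation from this documentation:

```python
from dsipts.models.LinearTS import LinearTS

# Illustrative values only; all numbers here are assumptions.
model = LinearTS(
    past_steps=64,              # look-back window
    future_steps=16,            # horizon to predict
    past_channels=3,            # numeric past variables (must be > 0)
    future_channels=1,          # numeric future covariates
    embs=[12, 7],               # cardinalities of the categorical variables
    cat_emb_dim=4,              # embedding size per categorical variable
    kernel_size=25,             # moving-average kernel for the decomposition
    sum_emb=True,               # sum the embeddings instead of stacking them
    out_channels=1,             # variables to predict
    hidden_size=128,
    kind='dlinear',             # 'linear', 'dlinear' or 'nlinear'
    quantiles=[0.1, 0.5, 0.9],  # non-empty list enables quantile loss
)
```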
- handle_multivariate = True¶
- handle_future_covariates = True¶
- handle_categorical_variables = True¶
- handle_quantile_loss = True¶
- description = 'Can handle multivariate output \nCan handle future covariates\nCan handle categorical covariates\nCan handle Quantile loss function\n THE SIMPLE IMPLEMENTATION DOES NOT USE CATEGORICAL NOR FUTURE VARIABLES'¶
- __init__(past_steps, future_steps, past_channels, future_channels, embs, cat_emb_dim, kernel_size, sum_emb, out_channels, hidden_size, dropout_rate=0.1, activation='torch.nn.ReLU', kind='linear', use_bn=False, persistence_weight=0.0, loss_type='l1', quantiles=[], n_classes=0, optim=None, optim_config=None, scheduler_config=None, simple=False, **kwargs)[source]¶
Linear model from https://github.com/cure-lab/LTSF-Linear/blob/main/run_longExp.py
- Parameters:
past_steps (int) – number of past datapoints used
future_steps (int) – number of future lags to predict
past_channels (int) – number of numeric past variables, must be >0
future_channels (int) – number of future numeric variables
embs (List[int]) – list of the initial dimension of the categorical variables
cat_emb_dim (int) – final dimension of each categorical variable
kernel_size (int) – kernel dimension for initial moving average
sum_emb (bool) – if true, the contribution of each embedding will be summed up; otherwise stacked
out_channels (int) – number of output channels
hidden_size (int) – hidden size of the linear block
dropout_rate (float, optional) – dropout rate in Dropout layers. Default 0.1
activation (str, optional) – PyTorch activation function. Default torch.nn.ReLU
kind (str, optional) – one among linear, dlinear (de-trending), nlinear (differential). Defaults to ‘linear’.
use_bn (bool, optional) – if true BN layers will be added and dropouts will be removed. Default False
quantiles (List[int], optional) – quantile loss is used if len(quantiles) > 0 (usually 0.1, 0.5, 0.9); L1 loss is used in case len(quantiles)==0. Defaults to [].
persistence_weight (float) – weight controlling the divergence from persistence model. Default 0
loss_type (str, optional) – this model uses custom losses, or l1 or mse. Custom losses can be linear_penalization or exponential_penalization. Default l1.
n_classes (int) – number of classes (0 in regression)
optim (str, optional) – if not None, it expects a pytorch optim method. Defaults to None, which is mapped to Adam.
optim_config (dict, optional) – configuration for Adam optimizer. Defaults to None.
scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.
simple (bool, optional) – if true, the model is the same as the one illustrated in the paper; otherwise a more elaborate model built on the same idea
- forward(batch)[source]¶
Forward method used during the training loop
- Parameters:
batch (dict) – the batch structure. The keys are:
y: the target variable(s). Always present.
x_num_past: the numerical past variables. Always present.
x_num_future: the numerical future variables.
x_cat_past: the categorical past variables.
x_cat_future: the categorical future variables.
idx_target: index of the target features in the past array.
A hypothetical batch example follows the return description below.
- Returns:
output of the model
- Return type:
torch.tensor
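For illustration, a hypothetical batch for the forward call, using the model instance from the instantiation sketch above. The (batch, time, channels) tensor layout, the dtypes, and the idx_target format are assumptions inferred from the parameter descriptions, not verified against the dsipts data pipeline:

```python
import torch

B = 8  # batch size; shapes follow the hypothetical model above
batch = {
    "y":            torch.randn(B, 16, 1),            # targets, always present
    "x_num_past":   torch.randn(B, 64, 3),            # always present
    "x_num_future": torch.randn(B, 16, 1),
    "x_cat_past":   torch.randint(0, 7, (B, 64, 2)),  # 2 categorical variables; values kept
    "x_cat_future": torch.randint(0, 7, (B, 16, 2)),  # below the smaller cardinality in embs
    "idx_target":   torch.tensor([0]),                # assumed format: target's position in x_num_past
}
out = model(batch)  # a torch.Tensor; exact shape depends on out_channels and quantiles
```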