API Reference

Data Management

dsipts.Monash

Class for downloading the datasets listed at https://forecastingdata.org/

dsipts.read_public_dataset

Returns the chosen public dataset.

Data Structure

dsipts.TimeSeries

Class for generating a time series object.

dsipts.TimeSeries.load_signal

This is a crucial point in the data structure: the dataset is expected to have a time column of timestamp type.
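A minimal sketch of a dataset in the expected shape, built with pandas. The commented dsipts calls at the end are assumptions about typical usage; the keyword names (past_variables, target_variables) are illustrative, not verified against the actual load_signal signature.

```python
import numpy as np
import pandas as pd

# Build a toy dataset with a `time` column of timestamp dtype,
# which is the crucial requirement for load_signal.
rng = pd.date_range("2024-01-01", periods=200, freq="h")
data = pd.DataFrame({
    "time": rng,                                   # timestamp column (required)
    "y": np.sin(np.arange(200) * 2 * np.pi / 24),  # target signal (daily cycle)
    "x": np.random.randn(200),                     # a numeric covariate
})
assert pd.api.types.is_datetime64_any_dtype(data["time"])

# Hedged dsipts calls (keyword names are assumptions, check your version):
# from dsipts import TimeSeries
# ts = TimeSeries("toy")
# ts.load_signal(data, target_variables=["y"], past_variables=["x"])
```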

Models

class dsipts.RNN(hidden_RNN: int, num_layers_RNN: int, kind: str, kernel_size: int, activation: str = 'torch.nn.ReLU', remove_last=False, dropout_rate: float = 0.1, use_bn: bool = False, num_blocks: int = 4, bidirectional: bool = True, lstm_type: str = 'slstm', **kwargs)

Bases: Base

Initialize a recurrent model with an encoder-decoder structure.

Parameters:
  • hidden_RNN (int) – Hidden size of the RNN block.

  • num_layers_RNN (int) – Number of RNN layers.

  • kind (str) – Type of RNN to use: ‘gru’, ‘lstm’, or ‘xlstm’.

  • kernel_size (int) – Kernel size in the encoder convolutional block.

  • activation (str, optional) – Activation function from PyTorch. Default is ‘torch.nn.ReLU’.

  • remove_last (bool, optional) – If True, the model learns the difference with respect to the last seen point. Default is False.

  • dropout_rate (float, optional) – Dropout rate in Dropout layers. Default is 0.1.

  • use_bn (bool, optional) – If True, Batch Normalization layers will be added and Dropouts will be removed. Default is False.

  • num_blocks (int, optional) – Number of xLSTM blocks (only for xLSTM). Default is 4.

  • bidirectional (bool, optional) – If True, the RNN is bidirectional. Default is True.

  • lstm_type (str, optional) – Type of LSTM to use (only for xLSTM), either ‘slstm’ or ‘mlstm’. Default is ‘slstm’.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If the specified kind is not ‘lstm’, ‘gru’, or ‘xlstm’.
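The remove_last option, shared by several models in this reference, makes the network predict an offset from the last observed value rather than the absolute level. A minimal numpy sketch of the idea (not dsipts code):

```python
import numpy as np

def remove_last_transform(past, predicted_offsets):
    """Sketch of the remove_last trick: the model outputs offsets
    relative to the last seen point; the absolute forecast is
    recovered by adding that point back."""
    last = past[-1]
    return last + predicted_offsets

past = np.array([1.0, 2.0, 3.0])
offsets = np.array([0.5, 1.0])          # what the network would learn to output
forecast = remove_last_transform(past, offsets)
# forecast == [3.5, 4.0]
```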

class dsipts.LinearTS(kernel_size: int, hidden_size: int, dropout_rate: float = 0.1, activation: str = 'torch.nn.ReLU', kind: str = 'linear', use_bn: bool = False, simple: bool = False, **kwargs)

Bases: Base

Initialize the model with specified parameters. Linear model from https://github.com/cure-lab/LTSF-Linear/blob/main/run_longExp.py

Parameters:
  • kernel_size (int) – Kernel dimension for the initial moving average.

  • hidden_size (int) – Hidden size of the linear block.

  • dropout_rate (float, optional) – Dropout rate in Dropout layers. Default is 0.1.

  • activation (str, optional) – Activation function in PyTorch. Default is ‘torch.nn.ReLU’.

  • kind (str, optional) – Type of model, can be ‘linear’, ‘dlinear’ (de-trending), or ‘nlinear’ (differential). Defaults to ‘linear’.

  • use_bn (bool, optional) – If True, Batch Normalization layers will be added and Dropouts will be removed. Default is False.

  • simple (bool, optional) – If True, the model used is the same as illustrated in the paper; otherwise, a more complex model with the same idea is used. Default is False.

  • **kwargs – Additional keyword arguments for the parent class.

Raises:

ValueError – If an invalid activation function is provided.
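Here ‘dlinear’ refers to the decomposition used in LTSF-Linear: a moving average of width kernel_size extracts the trend, the remainder is the seasonal part, and each is modeled by its own linear layer. A hedged numpy sketch of the decomposition step only (edge padding here is a simplification of the original code):

```python
import numpy as np

def moving_average_decompose(x, kernel_size):
    """Split a 1-D series into trend (moving average) and remainder,
    as in the DLinear preprocessing step."""
    pad_left = (kernel_size - 1) // 2
    pad_right = kernel_size - 1 - pad_left
    # pad with edge values so the trend has the same length as x
    padded = np.concatenate([np.repeat(x[0], pad_left), x,
                             np.repeat(x[-1], pad_right)])
    kernel = np.ones(kernel_size) / kernel_size
    trend = np.convolve(padded, kernel, mode="valid")
    seasonal = x - trend
    return trend, seasonal

x = np.arange(10, dtype=float)
trend, seasonal = moving_average_decompose(x, kernel_size=3)
```

By construction, trend + seasonal reconstructs the input exactly.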

class dsipts.Persistent(**kwargs)

Bases: Base

Simple persistence model, aligned with all the other models.
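A persistence forecast, in the usual sense, repeats the last observed value over the whole horizon; this is presumably what the model computes, sketched here in plain numpy:

```python
import numpy as np

def persistence_forecast(past, future_steps):
    """Naive persistence baseline: repeat the last seen value."""
    return np.repeat(past[-1], future_steps)

fc = persistence_forecast(np.array([1.0, 2.0, 5.0]), 4)
# fc == [5.0, 5.0, 5.0, 5.0]
```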

class dsipts.D3VAE(scale=0.1, hidden_size=64, num_layers=2, dropout_rate=0.1, diff_steps=200, loss_type='kl', beta_end=0.01, beta_schedule='linear', channel_mult=2, mult=1, num_preprocess_blocks=1, num_preprocess_cells=3, num_channels_enc=16, arch_instance='res_mbconv', num_latent_per_group=6, num_channels_dec=16, groups_per_scale=2, num_postprocess_blocks=1, num_postprocess_cells=2, beta_start=0, freq='h', **kwargs)

Bases: Base

This is the base model; each implemented model must override the init and forward methods. The inference step is optional: by default it uses the forward method, but for recurrent networks you should implement your own.

Parameters:
  • verbose (bool) – Flag to enable verbose logging.

  • past_steps (int) – Number of past time steps to consider.

  • future_steps (int) – Number of future time steps to predict.

  • past_channels (int) – Number of channels in the past input data.

  • future_channels (int) – Number of channels in the future input data.

  • out_channels (int) – Number of output channels.

  • embs_past (List[int]) – List of embedding dimensions for past data.

  • embs_fut (List[int]) – List of embedding dimensions for future data.

  • n_classes (int, optional) – Number of classes for classification. Defaults to 0.

  • persistence_weight (float, optional) – Weight for persistence in loss calculation. Defaults to 0.0.

  • loss_type (str, optional) – Type of loss function to use (‘l1’ or ‘mse’). Defaults to ‘l1’.

  • quantiles (List[int], optional) – List of quantiles for quantile loss. Defaults to an empty list.

  • reduction_mode (str, optional) – Mode for reduction for categorical embedding layer (‘mean’, ‘sum’, ‘none’). Defaults to ‘mean’.

  • use_classical_positional_encoder (bool, optional) – If True, use classical positional encoding; otherwise, an embedding layer is used for the positions as well. Defaults to False.

  • emb_dim (int, optional) – Dimension of categorical embeddings. Defaults to 16.

  • optim (Union[str, None], optional) – Optimizer type. Defaults to None.

  • optim_config (dict, optional) – Configuration for the optimizer. Defaults to None.

  • scheduler_config (dict, optional) – Configuration for the learning rate scheduler. Defaults to None.

Raises:
  • AssertionError – If the number of quantiles is not equal to 3 when quantiles are provided.

  • AssertionError – If the number of output channels is not 1 for classification tasks.
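The diffusion parameters (diff_steps, beta_start, beta_end, beta_schedule='linear') define the usual noising schedule of a diffusion model. A minimal sketch of a linear schedule and the cumulative signal-retention factors, under the standard DDPM convention (an assumption about this implementation):

```python
import numpy as np

def linear_beta_schedule(diff_steps, beta_start=0.0, beta_end=0.01):
    """Linear noising schedule: beta_t grows linearly from beta_start
    to beta_end; alpha_bar_t, the cumulative product of (1 - beta),
    says how much of the original signal survives after t steps."""
    betas = np.linspace(beta_start, beta_end, diff_steps)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars

# defaults from the D3VAE signature above
betas, alpha_bars = linear_beta_schedule(200, beta_start=0.0, beta_end=0.01)
```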

class dsipts.DilatedConv(sum_layers: bool, hidden_RNN: int, num_layers_RNN: int, kind: str, kernel_size: int, activation: str = 'torch.nn.ReLU', remove_last=False, dropout_rate: float = 0.1, use_bn: bool = False, use_glu: bool = True, glu_percentage: float = 1.0, **kwargs)

Bases: Base

Custom encoder-decoder

Parameters:
  • sum_layers (bool) – Flag indicating whether to sum the layers.

  • hidden_RNN (int) – Number of hidden units in the RNN.

  • num_layers_RNN (int) – Number of layers in the RNN.

  • kind (str) – Type of RNN to use (e.g., ‘LSTM’, ‘GRU’).

  • kernel_size (int) – Size of the convolutional kernel.

  • activation (str, optional) – Activation function to use. Defaults to ‘torch.nn.ReLU’.

  • remove_last (bool, optional) – Flag to indicate whether to remove the last element in the sequence. Defaults to False.

  • dropout_rate (float, optional) – Dropout rate for regularization. Defaults to 0.1.

  • use_bn (bool, optional) – Flag to indicate whether to use batch normalization. Defaults to False.

  • use_glu (bool, optional) – Flag to indicate whether to use Gated Linear Units (GLU). Defaults to True.

  • glu_percentage (float, optional) – Percentage of GLU to apply. Defaults to 1.0.

  • **kwargs – Additional keyword arguments.

Returns:

None

class dsipts.TFT(d_model: int, num_layers_RNN: int, d_head: int, n_head: int, dropout_rate: float, **kwargs)

Bases: Base

Initializes the model for time series forecasting with attention mechanisms and recurrent neural networks.

This model is designed for direct forecasting, allowing for multi-output and multi-horizon predictions. It leverages attention mechanisms to enhance the selection of relevant past time steps and learn long-term dependencies. The architecture includes RNN enrichment, gating mechanisms to minimize the impact of irrelevant variables, and the ability to output prediction intervals through quantile regression.

Key features include:
  • Direct Model: Predicts all future steps at once.

  • Multi-Output Forecasting: Capable of predicting one or more variables simultaneously.

  • Multi-Horizon Forecasting: Predicts variables at multiple future time steps.

  • Attention-Based Mechanism: Enhances the selection of relevant past time steps and learns long-term dependencies.

  • RNN Enrichment: Utilizes LSTM for initial autoregressive approximation, which is refined by the rest of the network.

  • Gating Mechanisms: Reduces the contribution of irrelevant variables.

  • Prediction Intervals: Outputs percentiles (e.g., 10th, 50th, 90th) at each time step.

The model also facilitates interpretability by identifying:
  • Global importance of variables for both past and future.

  • Temporal patterns.

  • Significant events.
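The prediction intervals come from quantile (pinball) regression: under-prediction is weighted by q and over-prediction by (1 - q), so minimizing the loss yields the q-th conditional quantile. A minimal numpy sketch of the loss itself:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a single quantile level q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 1.5])   # mostly under-predicting
loss_10 = pinball_loss(y_true, y_pred, 0.1)
loss_90 = pinball_loss(y_true, y_pred, 0.9)
# the 90th-quantile loss penalizes these under-predictions more
```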

Parameters:
  • d_model (int) – General hidden dimension across the network, adjustable in sub-networks.

  • num_layers_RNN (int) – Number of layers in the recurrent neural network (LSTM).

  • d_head (int) – Dimension of each attention head.

  • n_head (int) – Number of attention heads.

  • dropout_rate (float) – Dropout rate applied uniformly across all dropout layers.

  • **kwargs – Additional keyword arguments for further customization.

class dsipts.Informer(d_model: int, hidden_size: int, n_layer_encoder: int, n_layer_decoder: int, mix: bool = True, activation: str = 'torch.nn.ReLU', remove_last=False, attn: str = 'prob', distil: bool = True, factor: int = 5, n_head: int = 1, dropout_rate: float = 0.1, **kwargs)

Bases: Base

Initialize the model with specified parameters. Informer from https://github.com/zhouhaoyi/Informer2020/tree/main/models

Parameters:
  • d_model (int) – The dimensionality of the model.

  • hidden_size (int) – The size of the hidden layers.

  • n_layer_encoder (int) – The number of layers in the encoder.

  • n_layer_decoder (int) – The number of layers in the decoder.

  • mix (bool, optional) – Whether to use mixed attention. Defaults to True.

  • activation (str, optional) – The activation function to use. Defaults to ‘torch.nn.ReLU’.

  • remove_last (bool, optional) – If True, the model learns the difference with respect to the last seen point. Defaults to False.

  • attn (str, optional) – The type of attention mechanism to use. Defaults to ‘prob’.

  • distil (bool, optional) – Whether to use distillation. Defaults to True.

  • factor (int, optional) – The factor for attention. Defaults to 5.

  • n_head (int, optional) – The number of attention heads. Defaults to 1.

  • dropout_rate (float, optional) – The dropout rate. Defaults to 0.1.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If any of the parameters are invalid.

Notes

Be sure to set split_params: shift: ${model_configs.future_steps}, as it is required.

class dsipts.VVA(past_steps: int, future_steps: int, past_channels: int, future_channels: int, embs: List[int], d_model: int, max_voc_size: int, token_split: int, num_layers: int, dropout_rate: float, n_heads: int, out_channels: int, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[int] = [], optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None, **kwargs)

Bases: Base

Custom encoder-decoder

Parameters:
  • past_steps (int) – number of past datapoints used

  • future_steps (int) – number of future lag to predict

  • past_channels (int) – number of numeric past variables, must be >0

  • future_channels (int) – number of future numeric variables

  • embs (List) – list of the initial dimension of the categorical variables

  • cat_emb_dim (int) – final dimension of each categorical variable

  • hidden_RNN (int) – hidden size of the RNN block

  • num_layers_RNN (int) – number of RNN layers

  • kind (str) – one among GRU or LSTM

  • kernel_size (int) – kernel size in the encoder convolutional block

  • sum_emb (bool) – if true the contribution of each embedding will be summed-up otherwise stacked

  • out_channels (int) – number of output channels

  • activation (str, optional) – activation function from PyTorch. Defaults to ‘torch.nn.ReLU’.

  • remove_last (bool, optional) – if True the model learns the difference with respect to the last seen point

  • persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0

  • loss_type (str, optional) – loss function to use: ‘l1’, ‘mse’, or one of the custom losses ‘linear_penalization’ or ‘exponential_penalization’. Defaults to ‘l1’.

  • quantiles (List[int], optional) – quantile loss is used if len(quantiles) > 0 (usually [0.1, 0.5, 0.9]); L1 loss is used if len(quantiles) == 0. Defaults to [].

  • dropout_rate (float, optional) – dropout rate in Dropout layers

  • use_bn (bool, optional) – if True, BN layers will be added and dropouts will be removed

  • use_glu (bool, optional) – use GLU for feature selection. Defaults to True.

  • glu_percentage (float, optional) – percentage of features to use. Defaults to 1.0.

  • n_classes (int) – number of classes (0 for regression)

  • optim (str, optional) – if not None, a PyTorch optimizer name is expected. Defaults to None, which is mapped to Adam.

  • optim_config (dict, optional) – configuration for the Adam optimizer. Defaults to None.

  • scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.

class dsipts.VQVAEA(past_steps: int, future_steps: int, past_channels: int, future_channels: int, hidden_channels: int, embs: List[int], d_model: int, max_voc_size: int, num_layers: int, dropout_rate: float, commitment_cost: float, decay: float, n_heads: int, out_channels: int, epoch_vqvae: int, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[int] = [], optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None, **kwargs)

Bases: Base

Custom encoder-decoder

Parameters:
  • past_steps (int) – number of past datapoints used

  • future_steps (int) – number of future lag to predict

  • past_channels (int) – number of numeric past variables, must be >0

  • future_channels (int) – number of future numeric variables

  • embs (List) – list of the initial dimension of the categorical variables

  • cat_emb_dim (int) – final dimension of each categorical variable

  • hidden_RNN (int) – hidden size of the RNN block

  • num_layers_RNN (int) – number of RNN layers

  • kind (str) – one among GRU or LSTM

  • kernel_size (int) – kernel size in the encoder convolutional block

  • sum_emb (bool) – if true the contribution of each embedding will be summed-up otherwise stacked

  • out_channels (int) – number of output channels

  • activation (str, optional) – activation function from PyTorch. Defaults to ‘torch.nn.ReLU’.

  • remove_last (bool, optional) – if True the model learns the difference with respect to the last seen point

  • persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0

  • loss_type (str, optional) – loss function to use: ‘l1’, ‘mse’, or one of the custom losses ‘linear_penalization’ or ‘exponential_penalization’. Defaults to ‘l1’.

  • quantiles (List[int], optional) – quantile loss is used if len(quantiles) > 0 (usually [0.1, 0.5, 0.9]); L1 loss is used if len(quantiles) == 0. Defaults to [].

  • dropout_rate (float, optional) – dropout rate in Dropout layers

  • use_bn (bool, optional) – if True, BN layers will be added and dropouts will be removed

  • use_glu (bool, optional) – use GLU for feature selection. Defaults to True.

  • glu_percentage (float, optional) – percentage of features to use. Defaults to 1.0.

  • n_classes (int) – number of classes (0 for regression)

  • optim (str, optional) – if not None, a PyTorch optimizer name is expected. Defaults to None, which is mapped to Adam.

  • optim_config (dict, optional) – configuration for the Adam optimizer. Defaults to None.

  • scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.

class dsipts.CrossFormer(d_model: int, hidden_size: int, n_head: int, seg_len: int, n_layer_encoder: int, win_size: int, factor: int = 10, dropout_rate: float = 0.1, activation: str = 'torch.nn.ReLU', **kwargs)

Bases: Base

CrossFormer from https://openreview.net/forum?id=vSVLM2j9eie

Parameters:
  • d_model (int) – The dimensionality of the model.

  • hidden_size (int) – The size of the hidden layers.

  • n_head (int) – The number of attention heads.

  • seg_len (int) – The length of the segments.

  • n_layer_encoder (int) – The number of layers in the encoder.

  • win_size (int) – The size of the window for attention.

  • factor (int, optional) – see .crossformer.attn.TwoStageAttentionLayer. Defaults to 10.

  • dropout_rate (float, optional) – The dropout rate. Defaults to 0.1.

  • activation (str, optional) – The activation function to use. Defaults to ‘torch.nn.ReLU’.

  • **kwargs – Additional keyword arguments for the parent class.

Returns:

This method does not return a value.

Return type:

None

Raises:

ValueError – If the activation function is not recognized.

class dsipts.Autoformer(label_len: int, d_model: int, dropout_rate: float, kernel_size: int, activation: str = 'torch.nn.ReLU', factor: float = 0.5, n_head: int = 1, n_layer_encoder: int = 2, n_layer_decoder: int = 2, hidden_size: int = 1048, **kwargs)

Bases: Base

Autoformer from https://github.com/cure-lab/LTSF-Linear

Parameters:
  • label_len (int) – see the original implementation; it appears to be a warm-up length (the decoder also produces some past predictions that are filtered out at the end)

  • d_model (int) – embedding dimension of the attention layer

  • dropout_rate (float) – dropout rate

  • kernel_size (int) – kernel size

  • activation (str, optional) – activation function to use. Defaults to ‘torch.nn.ReLU’.

  • factor (float, optional) – parameter of .autoformer.layers.AutoCorrelation used to find the top k. Defaults to 0.5.

  • n_head (int, optional) – number of heads. Defaults to 1.

  • n_layer_encoder (int, optional) – number of encoder layers. Defaults to 2.

  • n_layer_decoder (int, optional) – number of decoder layers. Defaults to 2.

  • hidden_size (int, optional) – output dimension of the transformer layer. Defaults to 1048.

class dsipts.PatchTST(d_model: int, patch_len: int, kernel_size: int, decomposition: bool = True, activation: str = 'torch.nn.ReLU', n_head: int = 1, n_layer: int = 2, stride: int = 8, remove_last: bool = False, hidden_size: int = 1048, dropout_rate: float = 0.1, **kwargs)

Bases: Base

Initializes the model with specified parameters. PatchTST from https://github.com/yuqinie98/PatchTST/blob/main/

Parameters:
  • d_model (int) – The dimensionality of the model.

  • patch_len (int) – The length of the patches.

  • kernel_size (int) – The size of the kernel for convolutional layers.

  • decomposition (bool, optional) – Whether to use decomposition. Defaults to True.

  • activation (str, optional) – The activation function to use. Defaults to ‘torch.nn.ReLU’.

  • n_head (int, optional) – The number of attention heads. Defaults to 1.

  • n_layer (int, optional) – The number of layers in the model. Defaults to 2.

  • stride (int, optional) – The stride for convolutional layers. Defaults to 8.

  • remove_last (bool, optional) – If True, the model learns the difference with respect to the last seen point. Defaults to False.

  • hidden_size (int, optional) – The size of the hidden layers. Defaults to 1048.

  • dropout_rate (float, optional) – The dropout rate for regularization. Defaults to 0.1.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If the activation function is not recognized.
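Patching splits the input series into (possibly overlapping) segments of length patch_len taken every stride steps, and these segments become the transformer's tokens. A minimal numpy sketch of the tokenization step:

```python
import numpy as np

def make_patches(x, patch_len, stride):
    """Cut a 1-D series into overlapping patches, as in PatchTST
    tokenization: patch i covers x[i*stride : i*stride + patch_len]."""
    n = (len(x) - patch_len) // stride + 1
    return np.stack([x[i * stride: i * stride + patch_len] for i in range(n)])

x = np.arange(32, dtype=float)
patches = make_patches(x, patch_len=16, stride=8)
# patches.shape == (3, 16): 3 tokens, each of length 16
```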

class dsipts.Diffusion(d_model: int, out_channels: int, past_steps: int, future_steps: int, past_channels: int, future_channels: int, embs: List[int], learn_var: bool, cosine_alpha: bool, diffusion_steps: int, beta: float, gamma: float, n_layers_RNN: int, d_head: int, n_head: int, dropout_rate: float, activation: str, subnet: int, perc_subnet_learning_for_step: float, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[float] = [], optim: str | None = None, optim_config: dict | None = None, scheduler_config: dict | None = None, **kwargs)

Bases: Base

Denoising Diffusion Probabilistic Model

Parameters:
  • d_model (int)

  • out_channels (int) – number of target variables

  • past_steps (int) – size of past window

  • future_steps (int) – size of future window to be predicted

  • past_channels (int) – number of variables available for the past context

  • future_channels (int) – number of variables known in the future, available for forecasting

  • embs (list[int]) – categorical variables dimensions for embeddings

  • learn_var (bool) – if True the model learns the posterior variance; otherwise the variance of the posterior distribution is used

  • cosine_alpha (bool) – Flag for the generation of alphas and betas

  • diffusion_steps (int) – number of noising steps for the initial sample

  • beta (float) – starting variable to generate the diffusion perturbations. Ignored if cosine_alpha == True

  • gamma (float) – trade-off variable balancing the loss between noise prediction and negative likelihood / KL divergence.

  • n_layers_RNN (int) – param for subnet

  • d_head (int) – param for subnet

  • n_head (int) – param for subnet

  • dropout_rate (float) – param for subnet

  • activation (str) – param for subnet

  • subnet (int) – =1 for attention subnet, =2 for linear subnet. Others can be added (wait for Black Friday for discounts)

  • perc_subnet_learning_for_step (float) – percentage controlling how many subnets are trained for each batch. Decrease this value if the loss blows up.

  • persistence_weight (float, optional) – Defaults to 0.0.

  • loss_type (str, optional) – Defaults to ‘l1’.

  • quantiles (List[float], optional) – Only [] accepted. Defaults to [].

  • optim (Union[str,None], optional) – Defaults to None.

  • optim_config (Union[dict,None], optional) – Defaults to None.

  • scheduler_config (Union[dict,None], optional) – Defaults to None.

class dsipts.DilatedConvED(sum_layers: bool, hidden_RNN: int, num_layers_RNN: int, kind: str, kernel_size: int, dropout_rate: float = 0.1, use_bn: bool = False, use_cumsum: bool = True, use_bilinear: bool = False, activation: str = 'torch.nn.ReLU', **kwargs)

Bases: Base

Initialize the model with specified parameters.

Parameters:
  • sum_layers (bool) – Flag indicating whether to sum layers in the encoder/decoder blocks.

  • hidden_RNN (int) – Number of hidden units in the RNN.

  • num_layers_RNN (int) – Number of layers in the RNN.

  • kind (str) – Type of RNN to use (‘lstm’ or ‘gru’).

  • kernel_size (int) – Size of the convolutional kernel.

  • dropout_rate (float, optional) – Dropout rate for regularization. Defaults to 0.1.

  • use_bn (bool, optional) – Flag to use batch normalization. Defaults to False.

  • use_cumsum (bool, optional) – Flag to use cumulative sum. Defaults to True.

  • use_bilinear (bool, optional) – Flag to use bilinear layers. Defaults to False.

  • activation (str, optional) – Activation function to use. Defaults to ‘torch.nn.ReLU’.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If the specified activation function is not recognized or if the kind is not ‘lstm’ or ‘gru’.

class dsipts.TIDE(hidden_size: int, d_model: int, n_add_enc: int, n_add_dec: int, dropout_rate: float, activation: str = '', **kwargs)

Bases: Base

Initializes the model with specified parameters for a neural network architecture. Long-term Forecasting with TiDE: Time-series Dense Encoder https://arxiv.org/abs/2304.08424

Parameters:
  • hidden_size (int) – The size of the hidden layers.

  • d_model (int) – The dimensionality of the model.

  • n_add_enc (int) – The number of additional encoder layers.

  • n_add_dec (int) – The number of additional decoder layers.

  • dropout_rate (float) – The dropout rate to be applied in the layers.

  • activation (str, optional) – The activation function to be used. Defaults to an empty string.

  • **kwargs – Additional keyword arguments passed to the parent class.

class dsipts.ITransformer(hidden_size: int, d_model: int, n_head: int, n_layer_decoder: int, use_norm: bool, class_strategy: str = 'projection', dropout_rate: float = 0.1, activation: str = '', **kwargs)

Bases: Base

Initialize the ITransformer model for time series forecasting.

This class implements the Inverted Transformer architecture as described in the paper “iTransformer: Inverted Transformers Are Effective for Time Series Forecasting” (https://arxiv.org/pdf/2310.06625).

Parameters:
  • hidden_size (int) – The first embedding size of the model (‘r’ in the paper).

  • d_model (int) – The second embedding size (‘r-tilde’ in the paper); should be smaller than hidden_size.

  • n_head (int) – The number of attention heads.

  • n_layer_decoder (int) – The number of layers in the decoder.

  • use_norm (bool) – Flag to indicate whether to use normalization.

  • class_strategy (str, optional) – The strategy for classification, can be ‘projection’, ‘average’, or ‘cls_token’. Defaults to ‘projection’.

  • dropout_rate (float, optional) – The dropout rate for regularization. Defaults to 0.1.

  • activation (str, optional) – The activation function to be used. Defaults to ‘’.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If the activation function is not recognized.
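The “inverted” part means each variate's whole history becomes a single token, so attention runs across channels rather than across time; before the linear projection to the hidden size, the embedding step amounts to a transpose. A shape-only sketch:

```python
import numpy as np

batch, time_steps, channels = 4, 96, 7
x = np.random.randn(batch, time_steps, channels)

# Invert: each channel's full series becomes one token of length time_steps;
# a linear layer would then project each token to the hidden dimension.
tokens = np.transpose(x, (0, 2, 1))   # (batch, channels, time_steps)
```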

class dsipts.TimeXER(patch_len: int, d_model: int, n_head: int, d_ff: int = 512, dropout_rate: float = 0.1, n_layer_decoder: int = 1, activation: str = '', **kwargs)

Bases: Base

Initialize the model with specified parameters. https://github.com/thuml/Time-Series-Library/blob/main/models/TimeMixer.py

Parameters:
  • patch_len (int) – Length of the patches.

  • d_model (int) – Dimension of the model.

  • n_head (int) – Number of attention heads.

  • d_ff (int, optional) – Dimension of the feedforward network. Defaults to 512.

  • dropout_rate (float, optional) – Dropout rate for regularization. Defaults to 0.1.

  • n_layer_decoder (int, optional) – Number of layers in the decoder. Defaults to 1.

  • activation (str, optional) – Activation function to use. Defaults to ‘’.

  • **kwargs – Additional keyword arguments passed to the superclass.

Raises:

ValueError – If an invalid activation function is provided.

class dsipts.TTM(model_path: str, past_steps: int, future_steps: int, freq_prefix_tuning: bool, freq: str, prefer_l1_loss: bool, prefer_longer_context: bool, loss_type: str, num_input_channels, prediction_channel_indices, exogenous_channel_indices, decoder_mode, fcm_context_length, fcm_use_mixer, fcm_mix_layers, fcm_prepend_past, enable_forecast_channel_mixing, out_channels: int, embs: List[int], remove_last=False, optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None, verbose=False, use_quantiles=False, persistence_weight: float = 0.0, quantiles: List[int] = [], **kwargs)

Bases: Base

TODO and FIX for future and past categorical variables

Parameters:
  • model_path (str) – _description_

  • past_steps (int) – _description_

  • future_steps (int) – _description_

  • freq_prefix_tuning (bool) – _description_

  • freq (str) – _description_

  • prefer_l1_loss (bool) – _description_

  • loss_type (str) – _description_

  • num_input_channels (_type_) – _description_

  • prediction_channel_indices (_type_) – _description_

  • exogenous_channel_indices (_type_) – _description_

  • decoder_mode (_type_) – _description_

  • fcm_context_length (_type_) – _description_

  • fcm_use_mixer (_type_) – _description_

  • fcm_mix_layers (_type_) – _description_

  • fcm_prepend_past (_type_) – _description_

  • enable_forecast_channel_mixing (_type_) – _description_

  • out_channels (int) – _description_

  • embs (List[int]) – _description_

  • remove_last (bool, optional) – _description_. Defaults to False.

  • optim (Union[str,None], optional) – _description_. Defaults to None.

  • optim_config (dict, optional) – _description_. Defaults to None.

  • scheduler_config (dict, optional) – _description_. Defaults to None.

  • verbose (bool, optional) – _description_. Defaults to False.

  • use_quantiles (bool, optional) – _description_. Defaults to False.

  • persistence_weight (float, optional) – _description_. Defaults to 0.0.

  • quantiles (List[int], optional) – _description_. Defaults to [].

class dsipts.Samformer(hidden_size: int, use_revin: bool, activation: str = '', **kwargs)

Bases: Base

Initialize the model with specified parameters. Samformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention. https://arxiv.org/pdf/2402.10198

Parameters:
  • hidden_size (int) – The size of the hidden layer.

  • use_revin (bool) – Flag indicating whether to use RevIN.

  • activation (str, optional) – The activation function to use. Defaults to ‘’.

  • **kwargs – Additional keyword arguments passed to the parent class.

Raises:

ValueError – If the activation function is not recognized.

class dsipts.Duet(factor: int, d_model: int, n_head: int, n_layer: int, CI: bool, d_ff: int, noisy_gating: bool, num_experts: int, kernel_size: int, hidden_size: int, k: int, dropout_rate: float = 0.1, activation: str = '', **kwargs)

Bases: Base

Initializes the model with the specified parameters. https://github.com/decisionintelligence/DUET

Parameters:
  • factor (int) – The factor for attention scaling. Not used, but kept from the original implementation.

  • d_model (int) – The dimensionality of the model.

  • n_head (int) – The number of attention heads.

  • n_layer (int) – The number of layers in the encoder.

  • CI (bool) – If True, perform channel-independent operations.

  • d_ff (int) – The dimensionality of the feedforward layer.

  • noisy_gating (bool) – Flag to indicate if noisy gating is used.

  • num_experts (int) – The number of experts in the mixture of experts.

  • kernel_size (int) – The size of the convolutional kernel.

  • hidden_size (int) – The size of the hidden layer.

  • k (int) – The number of clusters for the linear extractor.

  • dropout_rate (float, optional) – The dropout rate. Defaults to 0.1.

  • activation (str, optional) – The activation function to use. Defaults to ‘’.

  • **kwargs – Additional keyword arguments.

Raises:

ValueError – If the activation function is not recognized.

class dsipts.Base(verbose: bool, past_steps: int, future_steps: int, past_channels: int, future_channels: int, out_channels: int, embs_past: List[int], embs_fut: List[int], n_classes: int = 0, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[int] = [], reduction_mode: str = 'mean', use_classical_positional_encoder: bool = False, emb_dim: int = 16, optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None)

Bases: LightningModule

This is the base model; each implemented model must override the init and forward methods. The inference step is optional: by default it uses the forward method, but for recurrent networks you should implement your own.

Parameters:
  • verbose (bool) – Flag to enable verbose logging.

  • past_steps (int) – Number of past time steps to consider.

  • future_steps (int) – Number of future time steps to predict.

  • past_channels (int) – Number of channels in the past input data.

  • future_channels (int) – Number of channels in the future input data.

  • out_channels (int) – Number of output channels.

  • embs_past (List[int]) – List of embedding dimensions for past data.

  • embs_fut (List[int]) – List of embedding dimensions for future data.

  • n_classes (int, optional) – Number of classes for classification. Defaults to 0.

  • persistence_weight (float, optional) – Weight for persistence in loss calculation. Defaults to 0.0.

  • loss_type (str, optional) – Type of loss function to use (‘l1’ or ‘mse’). Defaults to ‘l1’.

  • quantiles (List[int], optional) – List of quantiles for quantile loss. Defaults to an empty list.

  • reduction_mode (str, optional) – Mode for reduction for categorical embedding layer (‘mean’, ‘sum’, ‘none’). Defaults to ‘mean’.

  • use_classical_positional_encoder (bool, optional) – If True, use classical positional encoding; otherwise, an embedding layer is used for the positions as well. Defaults to False.

  • emb_dim (int, optional) – Dimension of categorical embeddings. Defaults to 16.

  • optim (Union[str, None], optional) – Optimizer type. Defaults to None.

  • optim_config (dict, optional) – Configuration for the optimizer. Defaults to None.

  • scheduler_config (dict, optional) – Configuration for the learning rate scheduler. Defaults to None.

Raises:
  • AssertionError – If the number of quantiles is not equal to 3 when quantiles are provided.

  • AssertionError – If the number of output channels is not 1 for classification tasks.

class dsipts.Simple(hidden_size: int, dropout_rate: float = 0.1, activation: str = 'torch.nn.ReLU', **kwargs)

Bases: Base

This is the base model; each implemented model must override the init and forward methods. The inference step is optional: by default it uses the forward method, but for recurrent networks you should implement your own.

Parameters:
  • verbose (bool) – Flag to enable verbose logging.

  • past_steps (int) – Number of past time steps to consider.

  • future_steps (int) – Number of future time steps to predict.

  • past_channels (int) – Number of channels in the past input data.

  • future_channels (int) – Number of channels in the future input data.

  • out_channels (int) – Number of output channels.

  • embs_past (List[int]) – List of embedding dimensions for past data.

  • embs_fut (List[int]) – List of embedding dimensions for future data.

  • n_classes (int, optional) – Number of classes for classification. Defaults to 0.

  • persistence_weight (float, optional) – Weight for persistence in loss calculation. Defaults to 0.0.

  • loss_type (str, optional) – Type of loss function to use (‘l1’ or ‘mse’). Defaults to ‘l1’.

  • quantiles (List[int], optional) – List of quantiles for quantile loss. Defaults to an empty list.

  • reduction_mode (str, optional) – Mode for reduction for categorical embedding layer (‘mean’, ‘sum’, ‘none’). Defaults to ‘mean’.

  • use_classical_positional_encoder (bool, optional) – If True, use classical positional encoding; otherwise, an embedding layer is used for the positions as well. Defaults to False.

  • emb_dim (int, optional) – Dimension of categorical embeddings. Defaults to 16.

  • optim (Union[str, None], optional) – Optimizer type. Defaults to None.

  • optim_config (dict, optional) – Configuration for the optimizer. Defaults to None.

  • scheduler_config (dict, optional) – Configuration for the learning rate scheduler. Defaults to None.

Raises:
  • AssertionError – If the number of quantiles is not equal to 3 when quantiles are provided.

  • AssertionError – If the number of output channels is not 1 for classification tasks.