dsipts.models.tft package

Submodules

dsipts.models.tft.sub_nn module

class dsipts.models.tft.sub_nn.GLU(d_model: int)

Bases: Module

Gated Linear Unit

Auxiliary subnet for sigmoid element-wise multiplication

Parameters:

d_model (int) – dimension of operations

forward(x: Tensor) → Tensor

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
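As an illustration of what a GLU of this shape does, here is a minimal sketch: a linear branch scaled element-wise by a sigmoid gate. The class name `GLUSketch` and the two-branch layout are illustrative assumptions, not the dsipts implementation.

```python
import torch
import torch.nn as nn

class GLUSketch(nn.Module):
    """Gated Linear Unit: a linear branch scaled element-wise by a sigmoid gate."""
    def __init__(self, d_model: int):
        super().__init__()
        self.fc = nn.Linear(d_model, d_model)    # value branch
        self.gate = nn.Linear(d_model, d_model)  # gating branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sigmoid(gate(x)) lies in (0, 1) and scales fc(x) element-wise
        return self.fc(x) * torch.sigmoid(self.gate(x))

x = torch.randn(8, 16, 32)   # [batch, seq_len, d_model]
out = GLUSketch(32)(x)       # shape preserved: [8, 16, 32]
```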

class dsipts.models.tft.sub_nn.GRN(d_model: int, dropout_rate: float)

Bases: Module

Gated Residual Network

Auxiliary subnet for gating residual connections

Parameters:
  • d_model (int)

  • dropout_rate (float)

forward(x: Tensor, using_norm: bool = True) → Tensor

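In the TFT formulation, a GRN passes its input through a small feed-forward block, gates the result with a GLU, adds the residual, and normalizes. A minimal sketch under those assumptions (layer names and layout are illustrative, not the dsipts code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRNSketch(nn.Module):
    """Gated Residual Network: feed-forward block + GLU gate + residual + norm."""
    def __init__(self, d_model: int, dropout_rate: float):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_model)
        self.fc2 = nn.Linear(d_model, d_model)
        self.glu_fc = nn.Linear(d_model, d_model)
        self.glu_gate = nn.Linear(d_model, d_model)
        self.dropout = nn.Dropout(dropout_rate)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, using_norm: bool = True) -> torch.Tensor:
        eta = self.dropout(self.fc2(F.elu(self.fc1(x))))
        gated = self.glu_fc(eta) * torch.sigmoid(self.glu_gate(eta))  # GLU
        out = x + gated                                # residual connection
        return self.norm(out) if using_norm else out

x = torch.randn(4, 12, 64)
out = GRNSketch(64, dropout_rate=0.1)(x)               # [4, 12, 64]
```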

class dsipts.models.tft.sub_nn.InterpretableMultiHead(d_model, d_head, n_head)

Bases: Module

Interpretable MultiHead Attention

Similar to canonical multi-head attention with a Query-Key-Value structure. Its particularities are:
  • a single common “Value” linear layer shared by all heads

  • the outputs of all heads are summed together and then rescaled by the number of heads

The final output tensor is re-embedded in the initial dimension.

Parameters:
  • d_model (int) – starting and ending dimension of the net

  • d_head (int) – hidden dimension of all heads

  • n_head (int) – number of heads

forward(query: Tensor, key: Tensor, value: Tensor) → Tensor

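Following the particularities listed above (one shared Value projection, head outputs summed and rescaled by the number of heads, then re-embedded into d_model), the mechanism can be sketched as follows. This is a hedged reconstruction from the description, not the dsipts code:

```python
import torch
import torch.nn as nn

class InterpretableMHASketch(nn.Module):
    """Multi-head attention with a single Value projection shared by all heads."""
    def __init__(self, d_model: int, d_head: int, n_head: int):
        super().__init__()
        self.n_head = n_head
        self.q_proj = nn.ModuleList(nn.Linear(d_model, d_head) for _ in range(n_head))
        self.k_proj = nn.ModuleList(nn.Linear(d_model, d_head) for _ in range(n_head))
        self.v_proj = nn.Linear(d_model, d_head)   # one common "Value" layer
        self.out = nn.Linear(d_head, d_model)      # re-embed to d_model

    def forward(self, query, key, value):
        v = self.v_proj(value)                     # shared across heads
        heads = []
        for q_l, k_l in zip(self.q_proj, self.k_proj):
            q, k = q_l(query), k_l(key)
            scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
            heads.append(torch.softmax(scores, dim=-1) @ v)
        # sum head outputs, then rescale by the number of heads
        pooled = torch.stack(heads, dim=0).sum(dim=0) / self.n_head
        return self.out(pooled)

x = torch.randn(2, 10, 32)
out = InterpretableMHASketch(32, d_head=8, n_head=4)(x, x, x)   # [2, 10, 32]
```

Because every head reads the same Value projection, the per-head attention weights can be compared directly, which is what makes this variant "interpretable".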

class dsipts.models.tft.sub_nn.LSTM_Model(num_var: int, d_model: int, pred_step: int, num_layers: int, dropout: float)

Bases: Module

LSTM mapping […, d_model] to […, pred_step, num_var]

Parameters:
  • num_var (int) – number of variables encoded in the input tensor

  • d_model (int) – encoding dimension of the tensor

  • pred_step (int) – step to be predicted by LSTM

  • num_layers (int) – number of layers of LSTM

  • dropout (float)

forward(x)

Runs the LSTM over the input tensor x and reshapes the output according to pred_step and num_var.

Parameters:

x (torch.Tensor) – input tensor

Returns:

tensor reshaped to [B, pred_step, num_var]

Return type:

torch.Tensor
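The mapping from […, d_model] to [B, pred_step, num_var] can be sketched as below. Using the last layer's final hidden state plus a linear projection is an assumption about how the reshaping is achieved, not necessarily the dsipts implementation:

```python
import torch
import torch.nn as nn

class LSTMSketch(nn.Module):
    """Maps [B, seq_len, d_model] to [B, pred_step, num_var]."""
    def __init__(self, num_var: int, d_model: int, pred_step: int,
                 num_layers: int, dropout: float):
        super().__init__()
        self.pred_step, self.num_var = pred_step, num_var
        self.lstm = nn.LSTM(d_model, d_model, num_layers,
                            batch_first=True, dropout=dropout)
        self.proj = nn.Linear(d_model, pred_step * num_var)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(x)   # h: [num_layers, B, d_model]
        out = self.proj(h[-1])     # last layer's final hidden state
        return out.view(-1, self.pred_step, self.num_var)

x = torch.randn(3, 20, 32)         # [B, seq_len, d_model]
out = LSTMSketch(num_var=5, d_model=32, pred_step=6,
                 num_layers=2, dropout=0.1)(x)   # [3, 6, 5]
```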

class dsipts.models.tft.sub_nn.ResidualConnection(d_model, dropout_rate)

Bases: Module

Residual Connection of res_conn with GLU(x)

Auxiliary subnet for residual connections

Parameters:
  • d_model (int)

  • dropout_rate (float)

forward(x: Tensor, res_conn: Tensor, using_norm: bool = True) → Tensor

Residual connection applying the gating and normalizing computation to ‘x’ while ‘res_conn’ is used as-is.

Parameters:
  • x (torch.Tensor) – GLU(dropout(x))

  • res_conn (torch.Tensor) – tensor summed to x before normalization

  • using_norm (bool, optional) – whether to apply normalization to the result. Defaults to True.

Return type:

torch.Tensor
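Putting the pieces of the docstring together (x passes through dropout and a GLU, is summed with res_conn, and the result is optionally normalized), a sketch might look like this. Layer names and the use of LayerNorm are assumptions, not the dsipts code:

```python
import torch
import torch.nn as nn

class ResidualConnectionSketch(nn.Module):
    """Adds GLU(dropout(x)) to res_conn, with optional normalization."""
    def __init__(self, d_model: int, dropout_rate: float):
        super().__init__()
        self.dropout = nn.Dropout(dropout_rate)
        self.fc = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, res_conn, using_norm: bool = True):
        h = self.dropout(x)
        gated = self.fc(h) * torch.sigmoid(self.gate(h))   # GLU(dropout(x))
        out = res_conn + gated          # res_conn enters the sum unmodified
        return self.norm(out) if using_norm else out

x = torch.randn(4, 10, 16)
skip = torch.randn(4, 10, 16)
out = ResidualConnectionSketch(16, 0.1)(x, skip)           # [4, 10, 16]
```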

class dsipts.models.tft.sub_nn.embedding_cat_variables(seq_len: int, lag: int, d_model: int, emb_dims: list, device)

Bases: Module

Class for embedding categorical variables, adding 3 positional variables during forward

Parameters:
  • seq_len (int) – length of the sequence (sum of past and future steps)

  • lag (int) – number of future steps to be predicted

  • d_model (int) – dimension of all variables after they are embedded

  • emb_dims (list) – size of the embedding dictionary for each categorical variable, one entry per variable

  • device

forward(x: Tensor | int, device: device) → Tensor

All components of x are concatenated with 3 new variables for data augmentation, in this order:
  • pos_seq: assigns to each step its time position

  • pos_fut: assigns to each step its future position; 0 if it is a past step

  • is_fut: flags each step as future (1) or past (0)

Parameters:

x (torch.Tensor) – [bs, seq_len, num_vars]

Returns:

[bs, seq_len, num_vars+3, n_embd]

Return type:

torch.Tensor

get_cat_n_embd(cat_vars)
get_is_fut(bs)
get_pos_fut(bs)
get_pos_seq(bs)
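The three positional variables built by the helper methods above can be illustrated with plain tensors. The exact values (e.g. whether future positions start at 0 or 1) are assumptions for illustration, not the dsipts conventions:

```python
import torch

bs, seq_len, lag = 2, 10, 4      # batch size, total steps, future steps
past = seq_len - lag

# pos_seq: each step's position in the whole sequence
pos_seq = torch.arange(seq_len).expand(bs, -1)

# pos_fut: future position of each step, 0 for past steps
pos_fut = torch.cat([torch.zeros(past, dtype=torch.long),
                     torch.arange(1, lag + 1)]).expand(bs, -1)

# is_fut: 1 for future steps, 0 for past steps
is_fut = torch.cat([torch.zeros(past, dtype=torch.long),
                    torch.ones(lag, dtype=torch.long)]).expand(bs, -1)

print(pos_seq.shape, pos_fut.shape, is_fut.shape)   # each is [2, 10]
```

Each of these index tensors is then embedded to d_model and concatenated with the embedded categorical variables, giving the [bs, seq_len, num_vars+3, n_embd] output shape documented above.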

Module contents