dsipts.models.D3VAE module

dsipts.models.D3VAE.copy_parameters(net_source, net_dest, strict=True)[source]

Copies parameters from one network to another.

Parameters:

    net_source – Input network.

    net_dest – Output network.

    strict (bool) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
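The strict-matching semantics can be sketched with plain dicts standing in for torch state_dicts (a simplified illustration, not the actual implementation; in PyTorch this kind of copy is typically done with `net_dest.load_state_dict(net_source.state_dict(), strict=strict)`):

```python
def copy_parameters_sketch(source_state, dest_state, strict=True):
    """Toy sketch of parameter copying, with plain dicts standing in
    for torch state_dicts. With strict=True the two key sets must
    match exactly, otherwise a KeyError is raised."""
    if strict:
        missing = set(dest_state) - set(source_state)
        unexpected = set(source_state) - set(dest_state)
        if missing or unexpected:
            raise KeyError(f"missing keys: {sorted(missing)}, "
                           f"unexpected keys: {sorted(unexpected)}")
    # Copy only the keys the destination actually has.
    for key in dest_state:
        if key in source_state:
            dest_state[key] = source_state[key]
    return dest_state
```

With `strict=False`, keys present in only one of the two networks are silently skipped instead of raising.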

class dsipts.models.D3VAE.D3VAE(past_channels, past_steps, future_steps, future_channels, embs, out_channels, quantiles, embedding_dimension=32, scale=0.1, hidden_size=64, num_layers=2, dropout_rate=0.1, diff_steps=200, loss_type='kl', beta_end=0.01, beta_schedule='linear', channel_mult=2, mult=1, num_preprocess_blocks=1, num_preprocess_cells=3, num_channels_enc=16, arch_instance='res_mbconv', num_latent_per_group=6, num_channels_dec=16, groups_per_scale=2, num_postprocess_blocks=1, num_postprocess_cells=2, beta_start=0, freq='h', optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

Bases: Base

This is the basic model. Each implemented model must override the __init__ method and the forward method. The inference step is optional: by default it uses the forward method, but for recurrent networks you should implement your own method.
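The contract described above can be sketched as a skeleton (a hypothetical outline, not the real dsipts Base class):

```python
class BaseSketch:
    """Hypothetical skeleton of the model contract: subclasses must
    override __init__ and forward; inference is optional and falls
    back to forward unless the model (e.g. a recurrent one) needs
    its own step-by-step loop."""

    def forward(self, batch):
        raise NotImplementedError("each model must implement forward")

    def inference(self, batch):
        # Default behaviour: inference is just a forward pass.
        return self.forward(batch)


class ToyModel(BaseSketch):
    """Minimal subclass showing the override pattern."""

    def __init__(self, scale=2):
        self.scale = scale

    def forward(self, batch):
        return [self.scale * v for v in batch["x"]]
```

Here `ToyModel` inherits the default inference, so calling `inference` simply runs `forward`.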

__init__(past_channels, past_steps, future_steps, future_channels, embs, out_channels, quantiles, embedding_dimension=32, scale=0.1, hidden_size=64, num_layers=2, dropout_rate=0.1, diff_steps=200, loss_type='kl', beta_end=0.01, beta_schedule='linear', channel_mult=2, mult=1, num_preprocess_blocks=1, num_preprocess_cells=3, num_channels_enc=16, arch_instance='res_mbconv', num_latent_per_group=6, num_channels_dec=16, groups_per_scale=2, num_postprocess_blocks=1, num_postprocess_cells=2, beta_start=0, freq='h', optim=None, optim_config=None, scheduler_config=None, **kwargs)[source]

This is the basic model. Each implemented model must override the __init__ method and the forward method. The inference step is optional: by default it uses the forward method, but for recurrent networks you should implement your own method.

forward(batch)[source]

Forward method used during the training loop.

Parameters:

batch (dict) – the batch structure. The keys are:

    y: the target variable(s). Always present.

    x_num_past: the numerical past variables. Always present.

    x_num_future: the numerical future variables.

    x_cat_past: the categorical past variables.

    x_cat_future: the categorical future variables.

    idx_target: index of the target features in the past array.

Returns:

output of the model

Return type:

torch.Tensor
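A minimal batch skeleton matching the keys above might look like this (shapes are illustrative; nested lists stand in for the torch tensors the dataloader would actually produce):

```python
# Illustrative dimensions: batch size B, past_steps P, future_steps F,
# and the number of numerical past/future channels.
B, P, F = 2, 4, 3
n_num_past, n_num_future = 3, 1

batch = {
    # target variable(s): always present, shape (B, F, n_targets)
    "y": [[[0.0]] * F for _ in range(B)],
    # numerical past variables: always present, shape (B, P, n_num_past)
    "x_num_past": [[[0.0] * n_num_past] * P for _ in range(B)],
    # optional blocks
    "x_num_future": [[[0.0] * n_num_future] * F for _ in range(B)],
    "x_cat_past": [[[0]] * P for _ in range(B)],
    "x_cat_future": [[[0]] * F for _ in range(B)],
    # index of the target feature(s) inside x_num_past's channel axis
    "idx_target": [0],
}

# Only y, x_num_past, and idx_target are guaranteed to be present.
required = {"y", "x_num_past", "idx_target"}
assert required <= batch.keys()
```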

inference(batch)[source]

Note: inference must be implemented here because predicting step N uses the prediction from step N-1. TODO: handle known future continuous variables, which are not supported here yet.

Parameters:

batch (dict) – batch of the dataloader

Returns:

result

Return type:

torch.Tensor
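The step-by-step dependence noted above (step N uses the prediction from step N-1) can be sketched as a generic autoregressive loop (a simplified sketch; `model_step` and the sliding window are assumptions for illustration, not the actual D3VAE inference code):

```python
def autoregressive_rollout(model_step, past, n_steps):
    """Generic sketch of N-step inference: each new prediction is fed
    back into the input window before predicting the next step."""
    window = list(past)
    predictions = []
    for _ in range(n_steps):
        y_hat = model_step(window)       # predict one step ahead
        predictions.append(y_hat)
        window = window[1:] + [y_hat]    # drop oldest value, append prediction
    return predictions


# Toy one-step model: predicts twice the most recent value.
double_step = lambda w: 2 * w[-1]
preds = autoregressive_rollout(double_step, past=[1.0], n_steps=3)
# preds == [2.0, 4.0, 8.0]
```

Because each prediction enters the window, errors can compound over the horizon, which is why recurrent models need this dedicated loop rather than the default forward-based inference.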