dsipts.models.d3vae.model module

Authors:

Li,Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.model.diffusion_generate(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells)[source]

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

__init__(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.
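The diff_steps, beta_end, and beta_schedule arguments configure the diffusion noise schedule. The actual construction is internal to the module, but a schedule of this kind is conventionally built as follows (an illustrative sketch: the function name make_beta_schedule, the beta_start default, and the supported schedule names are assumptions, not part of this API):

```python
import numpy as np

def make_beta_schedule(schedule: str, diff_steps: int,
                       beta_start: float = 1e-4,
                       beta_end: float = 0.02) -> np.ndarray:
    """Build a per-step diffusion variance schedule (illustrative only;
    the d3vae implementation may differ)."""
    if schedule == "linear":
        # variances increase linearly from beta_start to beta_end
        betas = np.linspace(beta_start, beta_end, diff_steps)
    elif schedule == "quad":
        # linear in sqrt-space, quadratic in variance
        betas = np.linspace(beta_start ** 0.5, beta_end ** 0.5, diff_steps) ** 2
    else:
        raise NotImplementedError(f"unknown schedule: {schedule}")
    return betas
```

With diff_steps=100 and beta_schedule="linear", this yields 100 monotonically increasing variances spanning [beta_start, beta_end].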

forward(past_time_feat, future_time_feat, t)[source]

Output the generative results and related variables.

class dsipts.models.d3vae.model.denoise_net(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, beta_start, input_dim, freq, embs)[source]

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

__init__(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, beta_start, input_dim, freq, embs)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

extract(a, t, x_shape)[source]

Extract the t-th element of a for each batch entry and reshape it so it broadcasts against a tensor of shape x_shape.
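This helper follows the standard diffusion gather-and-reshape pattern; a sketch of how such a function is typically written (illustrative, the shipped implementation may differ in details):

```python
import torch

def extract(a: torch.Tensor, t: torch.Tensor, x_shape: torch.Size) -> torch.Tensor:
    """Pick a[t] per batch element and reshape the result to
    (batch, 1, ..., 1) so it broadcasts against tensors of shape x_shape."""
    batch_size = t.shape[0]
    out = a.gather(-1, t)  # shape: (batch_size,)
    # append singleton dims to match the rank of x_shape
    return out.reshape(batch_size, *((1,) * (len(x_shape) - 1)))
```

Typically a holds per-step schedule quantities (e.g. cumulative alphas of length diff_steps) and t holds the integer diffusion step for each batch element.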

forward(past_time_feat, mark, future_time_feat, t)[source]
Params:

past_time_feat: Tensor

the input time series.

mark: Tensor

the time-feature mark.

future_time_feat: Tensor

the target time series.

t: Tensor

the diffusion step.

Returns:

output: Tensor

The Gaussian distribution of the generative results.

y_noisy: Tensor

The diffused target.

total_c: Float

Total correlation of all the latent variables in the BVAE, used for disentangling.

all_z: List

All the latent variables of the BVAE.

loss: Float

The score-matching loss.
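The y_noisy return is the target diffused to step t. How such a diffused target is conventionally produced can be sketched with the standard forward-diffusion formula (a hedged illustration, not the exact d3vae code; q_sample is a hypothetical name):

```python
import torch

def q_sample(y0: torch.Tensor, alphas_cumprod: torch.Tensor,
             t: torch.Tensor, noise: torch.Tensor = None) -> torch.Tensor:
    """Diffuse y0 to step t:
    y_t = sqrt(abar_t) * y0 + sqrt(1 - abar_t) * eps."""
    if noise is None:
        noise = torch.randn_like(y0)
    # gather abar_t per batch element and reshape for broadcasting
    abar_t = alphas_cumprod.gather(0, t).reshape(-1, *([1] * (y0.dim() - 1)))
    return abar_t.sqrt() * y0 + (1.0 - abar_t).sqrt() * noise
```

As t grows, the cumulative product abar_t shrinks, so the signal term fades and the noise term dominates.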

class dsipts.models.d3vae.model.pred_net(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, beta_start, input_dim, freq, embs)[source]

Bases: denoise_net

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, mark)[source]

Generate the prediction with the trained model.

Returns:

y: Tensor

The noisy generative results.

out: Tensor

The denoised results, obtained by removing the noise from y through score matching.

tc: Float

Total correlation, an indicator of the extent of disentangling.
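Removing the noise from y through score matching rests, in generic score-based denoising, on a single corrective move of the form y + sigma^2 * score(y); a sketch of that identity (illustrative of the principle only, not the model's actual code; denoise_step and score are hypothetical names):

```python
import torch

def denoise_step(y: torch.Tensor, score, sigma: float) -> torch.Tensor:
    """One score-based denoising move: y_clean ~= y + sigma^2 * score(y),
    where score(y) approximates the gradient of the log-density at y."""
    return y + (sigma ** 2) * score(y)
```

For a Gaussian centered at mu with scale sigma, the true score is (mu - y) / sigma^2, and the step recovers mu exactly; a learned score network plays that role in practice.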

class dsipts.models.d3vae.model.Discriminator(neg_slope=0.2, latent_dim=10, hidden_units=1000, out_units=2)[source]

Bases: Module

Discriminator proposed in [1].

Parameters:

neg_slope: float

Hyperparameter for the Leaky ReLU.

latent_dim: int

Dimensionality of the latent variables.

hidden_units: int

Number of hidden units in the MLP.

Model architecture: a 6-layer multi-layer perceptron, each layer with 1000 hidden units, Leaky ReLU activations, and 2 output logits.

References: [1] Kim, Hyunjik, and Andriy Mnih. "Disentangling by factorising." arXiv preprint arXiv:1802.05983 (2018).

__init__(neg_slope=0.2, latent_dim=10, hidden_units=1000, out_units=2)[source]

Discriminator proposed in [1].

Parameters:

neg_slope: float

Hyperparameter for the Leaky ReLU.

latent_dim: int

Dimensionality of the latent variables.

hidden_units: int

Number of hidden units in the MLP.

Model architecture: a 6-layer multi-layer perceptron, each layer with 1000 hidden units, Leaky ReLU activations, and 2 output logits.

References: [1] Kim, Hyunjik, and Andriy Mnih. "Disentangling by factorising." arXiv preprint arXiv:1802.05983 (2018).
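The architecture stated above (six linear layers of 1000 hidden units, Leaky ReLU activations, two output logits) can be sketched as follows (a minimal re-implementation for illustration; the shipped class may differ in layer naming and details):

```python
import torch
import torch.nn as nn

class DiscriminatorSketch(nn.Module):
    """Illustrative 6-layer MLP discriminator: latent_dim -> 1000 x 5 -> 2,
    with Leaky ReLU between linear layers."""
    def __init__(self, neg_slope: float = 0.2, latent_dim: int = 10,
                 hidden_units: int = 1000, out_units: int = 2):
        super().__init__()
        layers = [nn.Linear(latent_dim, hidden_units), nn.LeakyReLU(neg_slope)]
        for _ in range(4):  # four more hidden layers (6 linear layers total)
            layers += [nn.Linear(hidden_units, hidden_units), nn.LeakyReLU(neg_slope)]
        layers.append(nn.Linear(hidden_units, out_units))  # 2 output logits
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)
```

Calling the module instance (rather than forward directly) on a batch of latent samples of shape (batch, latent_dim) yields logits of shape (batch, 2).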

forward(z)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.