dsipts.models.d3vae package

Submodules

dsipts.models.d3vae.diffusion_process module

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.diffusion_process.GaussianDiffusion(bvae, input_size, beta_start=0, beta_end=0.1, diff_steps=100, loss_type='l2', betas=None, scale=0.1, beta_schedule='linear')

Bases: Module

Params:

bvae: The bidirectional VAE model.

beta_start: The start value of the beta schedule.

beta_end: The end value of the beta schedule.

beta_schedule: The kind of beta schedule; fixed to linear here, but it can be adjusted as needed.

diff_steps: The maximum number of diffusion steps.

scale: Scale parameter for the target time series.
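For orientation, a minimal sketch of what a linear beta schedule and its cumulative products look like; the helper name and the numpy usage are illustrative assumptions, not this module's exact code:

import numpy as np

def linear_beta_schedule(beta_start, beta_end, diff_steps):
    # Evenly spaced noise variances beta_1 .. beta_T (hypothetical helper).
    return np.linspace(beta_start, beta_end, diff_steps, dtype=np.float64)

# Cumulative products used by the closed-form forward diffusion q(x_t | x_0).
betas = linear_beta_schedule(0.0, 0.1, 100)
alphas_cumprod = np.cumprod(1.0 - betas)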

log_prob(x_input, y_target, time)
p_losses(x_start, y_target, t, noise=None, noise1=None)

Put the diffused input into the BVAE to generate the output.

Params:

param x_start:

[B, T, *]

param y_target:

[B1, T1, *]

param t:

[B,]

return y_noisy:

The diffused target.

return total_c:

The total correlations of the latent variables in the BVAE.

return all_z:

All latent variables of the BVAE.

q_sample(x_start, t, noise=None)

Diffuse the initial input.

param x_start:

[B, T, *]

return:

[B, T, *]
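For reference, a minimal sketch of the closed-form forward-diffusion step that a q_sample method typically computes; the precomputed-buffer names are assumptions, not necessarily this module's attributes:

import torch

def q_sample_sketch(x_start, t, sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod, noise=None):
    # Closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    if noise is None:
        noise = torch.randn_like(x_start)
    shape = (x_start.shape[0],) + (1,) * (x_start.dim() - 1)
    coef1 = sqrt_alphas_cumprod.gather(0, t).reshape(shape)
    coef2 = sqrt_one_minus_alphas_cumprod.gather(0, t).reshape(shape)
    return coef1 * x_start + coef2 * noise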

q_sample_target(y_target, t, noise=None)

Diffuse the target.

param y_target:

[B1, T1, *]

return:

(tensor) [B1, T1, *]

dsipts.models.d3vae.diffusion_process.default(val, d)
dsipts.models.d3vae.diffusion_process.extract(a, t, x_shape)
dsipts.models.d3vae.diffusion_process.get_beta_schedule(beta_schedule, beta_start, beta_end, num_diffusion_timesteps)
dsipts.models.d3vae.diffusion_process.noise_like(shape, device, repeat=False)

dsipts.models.d3vae.embedding module

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.embedding.DataEmbedding(c_in, d_model, embs, dropout=0.1)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, x_mark)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.embedding.PositionalEmbedding(d_model, max_len=5000)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
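As a reference point, a minimal sketch of the standard sinusoidal positional embedding a module like this usually implements; the buffer layout and the assumption of an even d_model are mine, not verified against this file:

import math
import torch
import torch.nn as nn

class PositionalEmbeddingSketch(nn.Module):
    """Hypothetical sketch of fixed sinusoidal positional encodings (assumes even d_model)."""

    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # [1, max_len, d_model]

    def forward(self, x):
        # Return the encodings for the first x.size(1) positions.
        return self.pe[:, : x.size(1)]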

class dsipts.models.d3vae.embedding.TemporalEmbedding(d_model, freq='h')

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.embedding.TokenEmbedding(c_in, d_model)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

dsipts.models.d3vae.encoder module

Description:

The model architecture of the bidirectional VAE. Note: part of the code is borrowed from https://github.com/NVlabs/NVAE.

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.encoder.Cell(Cin, Cout, cell_type, arch, use_se)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(s)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.encoder.Encoder(channel_mult, mult, prediction_length, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, embedding_dimension, hidden_size, target_dim, sequence_length, num_layers, dropout_rate)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

decoder_output(logits)
forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

init_decoder_tower(mult)
init_encoder_tower(mult)
init_post_process(mult)
init_pre_process(mult)
init_sampler(mult)
class dsipts.models.d3vae.encoder.Normal(mu, log_sigma, temp=1.0)

Bases: object

kl(normal_dist)
log_p(samples)
sample()
sample_given_eps(eps)
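For context, a minimal sketch of what an NVAE-style reparameterized Gaussian wrapper like this typically looks like; the exact details here are assumptions:

import torch

class NormalSketch:
    """Hypothetical sketch of a diagonal Gaussian with reparameterized sampling."""

    def __init__(self, mu, log_sigma, temp=1.0):
        self.mu = mu
        self.sigma = torch.exp(log_sigma) * temp

    def sample_given_eps(self, eps):
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        return self.mu + self.sigma * eps

    def sample(self):
        return self.sample_given_eps(torch.randn_like(self.mu))

    def kl(self, other):
        # Element-wise KL(self || other) between two diagonal Gaussians.
        term1 = (self.mu - other.mu) / other.sigma
        term2 = self.sigma / other.sigma
        return 0.5 * (term1 ** 2 + term2 ** 2) - 0.5 - torch.log(term2)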
class dsipts.models.d3vae.encoder.NormalDecoder(param)

Bases: object

log_prob(samples)
sample()
dsipts.models.d3vae.encoder.log_density_gaussian(sample, mu, logvar)

Calculates the log density of a Gaussian.

Parameters:

sample (torch.Tensor or np.ndarray or float) – Value at which to compute the density.

mu (torch.Tensor or np.ndarray or float) – Mean.

logvar (torch.Tensor or np.ndarray or float) – Log variance.
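A minimal sketch of the usual formula, log N(sample; mu, exp(logvar)); an illustration, not necessarily this function verbatim:

import math
import torch

def log_density_gaussian_sketch(sample, mu, logvar):
    # log N(x; mu, sigma^2) = -0.5 * (log(2*pi) + logvar) - (x - mu)^2 / (2 * exp(logvar))
    normalization = -0.5 * (math.log(2 * math.pi) + logvar)
    return normalization - 0.5 * (sample - mu) ** 2 / torch.exp(logvar)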

dsipts.models.d3vae.encoder.sample_normal_jit(mu, sigma)
dsipts.models.d3vae.encoder.soft_clamp5(x: Tensor)
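In NVAE-derived code, soft_clamp5 typically squashes activations smoothly into (-5, 5); a sketch of that common formulation, assumed rather than verified against this file:

import torch

def soft_clamp5_sketch(x: torch.Tensor) -> torch.Tensor:
    # Smooth limiting: 5 * tanh(x / 5) behaves like the identity near 0
    # but saturates at +/-5, avoiding the hard edges of a plain clamp.
    return 5.0 * torch.tanh(x / 5.0)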

dsipts.models.d3vae.model module

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.model.Discriminator(neg_slope=0.2, latent_dim=10, hidden_units=1000, out_units=2)

Bases: Module

Discriminator proposed in [1].

Parameters:

neg_slope (float) – Hyperparameter for the Leaky ReLU.

latent_dim (int) – Dimensionality of the latent variables.

hidden_units (int) – Number of hidden units in the MLP.

out_units (int) – Number of output logits.

Model Architecture:

  • 6-layer multi-layer perceptron, each layer with 1000 hidden units

  • Leaky ReLU activations

  • Output: 2 logits

References:

[1] Kim, Hyunjik, and Andriy Mnih. “Disentangling by Factorising.” arXiv preprint arXiv:1802.05983 (2018).
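A minimal sketch of the architecture described above, under the stated defaults; the exact wiring is an assumption based on the docstring and [1]:

import torch.nn as nn

def make_discriminator_sketch(latent_dim=10, hidden_units=1000, out_units=2, neg_slope=0.2):
    """Hypothetical 6-layer MLP discriminator with Leaky ReLU activations."""
    layers = [nn.Linear(latent_dim, hidden_units), nn.LeakyReLU(neg_slope)]
    for _ in range(4):
        layers += [nn.Linear(hidden_units, hidden_units), nn.LeakyReLU(neg_slope)]
    layers.append(nn.Linear(hidden_units, out_units))  # 2 logits
    return nn.Sequential(*layers)

In factorised-VAE training, the two logits score whether a latent sample comes from the joint posterior or from the product of its marginals.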

forward(z)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.model.denoise_net(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, beta_start, input_dim, freq, embs)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

extract(a, t, x_shape)

Extract the t-th element from a.
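A minimal sketch of the usual diffusion-code idiom for this helper; the broadcast reshape against x_shape is an assumption:

import torch

def extract_sketch(a: torch.Tensor, t: torch.Tensor, x_shape) -> torch.Tensor:
    # Gather a[t] for each batch element, then reshape so the result
    # broadcasts over a tensor of shape x_shape.
    out = a.gather(0, t)
    return out.reshape(t.shape[0], *((1,) * (len(x_shape) - 1)))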

forward(past_time_feat, mark, future_time_feat, t)

Params:

past_time_feat: Tensor

The input time series.

mark: Tensor

The time feature mark.

future_time_feat: Tensor

The target time series.

t: Tensor

The diffusion step.

Returns:

output: Tensor

The Gaussian distribution of the generative results.

y_noisy: Tensor

The diffused target.

total_c: Float

Total correlation of all the latent variables in the BVAE, used for disentangling.

all_z: List

All the latent variables of the BVAE.

loss: Float

The score-matching loss.

Return type:

output

class dsipts.models.d3vae.model.diffusion_generate(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(past_time_feat, future_time_feat, t)

Output the generative results and related variables.

class dsipts.models.d3vae.model.pred_net(target_dim, embedding_dimension, prediction_length, sequence_length, scale, hidden_size, num_layers, dropout_rate, diff_steps, loss_type, beta_end, beta_schedule, channel_mult, mult, num_preprocess_blocks, num_preprocess_cells, num_channels_enc, arch_instance, num_latent_per_group, num_channels_dec, groups_per_scale, num_postprocess_blocks, num_postprocess_cells, beta_start, input_dim, freq, embs)

Bases: denoise_net

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, mark)

Generate the prediction with the trained model.

Returns:

y: The noisy generative results.

out: Denoised results; the noise is removed from y through score matching.

tc: Total correlations, an indicator of the extent of disentangling.

Return type:

y

dsipts.models.d3vae.neural_operations module

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.neural_operations.BNELUConv(C_in, C_out, kernel_size, stride=1, padding=0, dilation=1)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.BNSwishConv(C_in, C_out, kernel_size, stride=1, padding=0, dilation=1)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)
Parameters:

x (torch.Tensor) – of size (B, C_in, H, W)

class dsipts.models.d3vae.neural_operations.Conv2D(C_in, C_out, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False, data_init=False, weight_norm=True)

Bases: Conv2d

Parameters:

use_shared (bool) – Whether to use shared weights for this layer.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

normalize_weight()

Applies weight normalization.

class dsipts.models.d3vae.neural_operations.ConvBNSwish(Cin, Cout, k=3, stride=1, groups=1, dilation=1)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.DecCombinerCell(Cin1, Cin2, Cout, cell_type)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x1, x2)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.ELUConv(C_in, C_out, kernel_size, stride=1, padding=0, dilation=1)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.EncCombinerCell(Cin1, Cin2, Cout, cell_type)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x1, x2)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.FactorizedReduce(C_in, C_out)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.Identity

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.InvertedResidual(Cin, Cout, stride, ex, dil, k, g)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.SE(Cin, Cout)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.Swish

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.SwishFN(*args, **kwargs)

Bases: Function

backward(grad_output)

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors, as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

forward(i)

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass

  • It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

  • See combining-forward-context for more details

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass

  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See extending-autograd for more details

The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
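For reference, a minimal sketch of a Swish implemented as a custom autograd Function; the derivative algebra is standard, but treating it as this module's exact code is an assumption:

import torch

class SwishFNSketch(torch.autograd.Function):
    """Hypothetical Swish: f(x) = x * sigmoid(x), with a hand-written backward."""

    @staticmethod
    def forward(ctx, i):
        result = i * torch.sigmoid(i)
        ctx.save_for_backward(i)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (i,) = ctx.saved_tensors
        s = torch.sigmoid(i)
        # d/dx [x * s(x)] = s(x) * (1 + x * (1 - s(x)))
        return grad_output * (s * (1 + i * (1 - s)))

SwishFNSketch.apply(x) would then be wrapped by a small nn.Module, presumably as the Swish class above does.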

class dsipts.models.d3vae.neural_operations.SyncBatchNorm(*args, **kwargs)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.SyncBatchNormSwish(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)

Bases: _BatchNorm

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(input)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.neural_operations.UpSample

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

dsipts.models.d3vae.neural_operations.act(t)
dsipts.models.d3vae.neural_operations.get_batchnorm(*args, **kwargs)
dsipts.models.d3vae.neural_operations.get_skip_connection(C, stride, channel_mult)
dsipts.models.d3vae.neural_operations.logit(t)
dsipts.models.d3vae.neural_operations.norm(t, dim)
dsipts.models.d3vae.neural_operations.normalize_weight_jit(log_weight_norm, weight)
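A minimal sketch of the weight normalization these helpers implement in NVAE-derived code; the norm axes assume a 4-D conv weight and are not verified against this file:

import torch

def normalize_weight_sketch(log_weight_norm: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Weight normalization: w = g * v / ||v||, with the gain g stored as log g.
    gain = torch.exp(log_weight_norm)
    norm = torch.sqrt(torch.sum(weight * weight, dim=[1, 2, 3], keepdim=True))
    return gain * weight / (norm + 1e-5)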

dsipts.models.d3vae.resnet module

Authors:

Li, Yan (liyan22021121@gmail.com)

class dsipts.models.d3vae.resnet.ConvMeanPool(input_dim, output_dim, kernel_size)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(input)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.MeanPoolConv(input_dim, output_dim, kernel_size)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(input)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.MyConvo2d(input_dim, output_dim, kernel_size, stride=1, bias=True)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(input)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.Res12_Quadratic(inchan, dim, hw, normalize=False, AF=None)

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x_in)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.ResidualBlock(input_dim, output_dim, kernel_size, hw, resample=None, normalize=False, AF=ELU(alpha=1.0))

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(input)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.Square

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(in_vect)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class dsipts.models.d3vae.resnet.Swish

Bases: Module

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(in_vect)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

dsipts.models.d3vae.resnet.weights_init(m)

dsipts.models.d3vae.utils module

dsipts.models.d3vae.utils.average_tensor(t, is_distributed)
dsipts.models.d3vae.utils.get_arch_cells(arch_type)
dsipts.models.d3vae.utils.get_input_size(dataset)
dsipts.models.d3vae.utils.get_stride_for_cell_type(cell_type)
dsipts.models.d3vae.utils.groups_per_scale(num_scales, num_groups_per_scale, is_adaptive, divider=2, minimum_groups=1)
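A minimal sketch of the adaptive grouping this helper typically computes in NVAE-style code, halving the group count per scale with a floor; the behavior is an assumption inferred from the signature:

def groups_per_scale_sketch(num_scales, num_groups_per_scale, is_adaptive, divider=2, minimum_groups=1):
    groups = []
    g = num_groups_per_scale
    for _ in range(num_scales):
        groups.append(g)
        if is_adaptive:
            # Reduce the number of groups at each coarser scale, never below the floor.
            g = max(g // divider, minimum_groups)
    return groups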

Module contents