dsipts.VQVAEA
- class dsipts.VQVAEA(past_steps: int, future_steps: int, past_channels: int, future_channels: int, hidden_channels: int, embs: List[int], d_model: int, max_voc_size: int, num_layers: int, dropout_rate: float, commitment_cost: float, decay: float, n_heads: int, out_channels: int, epoch_vqvae: int, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[int] = [], optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None, **kwargs)
Custom encoder-decoder built around a vector-quantized VAE (VQ-VAE): the series is encoded into discrete tokens drawn from a codebook of at most max_voc_size entries, and a GPT-style attention block models the token sequence (see gpt() and generate()).
- Parameters:
past_steps (int) – number of past datapoints used
future_steps (int) – number of future lags to predict
past_channels (int) – number of numeric past variables, must be >0
future_channels (int) – number of future numeric variables
hidden_channels (int) – hidden size of the encoder/decoder blocks
embs (List[int]) – list of the initial dimensions (cardinalities) of the categorical variables
d_model (int) – embedding dimension of the attention block
max_voc_size (int) – maximum size of the VQ codebook, i.e. the vocabulary of discrete tokens
num_layers (int) – number of attention layers
dropout_rate (float) – dropout rate in Dropout layers
commitment_cost (float) – weight of the commitment term in the VQ-VAE loss
decay (float) – EMA decay used for the codebook updates
n_heads (int) – number of attention heads
out_channels (int) – number of output channels
epoch_vqvae (int) – number of epochs devoted to the VQ-VAE (tokenization) stage of training
persistence_weight (float) – weight controlling the divergence from the persistence model. Default 0
loss_type (str, optional) – loss to use: l1, mse, or one of the custom losses linear_penalization and exponential_penalization. Default l1
quantiles (List[float], optional) – quantile loss is used if len(quantiles) > 0 (usually [0.1, 0.5, 0.9]); L1 loss is used if len(quantiles) == 0. Defaults to [].
optim (str, optional) – if not None it expects a pytorch optim method. Defaults to None, which is mapped to Adam.
optim_config (dict, optional) – configuration for the optimizer. Defaults to None.
scheduler_config (dict, optional) – configuration for the StepLR scheduler. Defaults to None.
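A minimal instantiation sketch follows. All hyperparameter values below are illustrative assumptions chosen for the example, not recommended settings; per the signature, optim takes a string naming a pytorch optimizer (or None, mapped to Adam):

from dsipts import VQVAEA

model = VQVAEA(
    past_steps=64,            # look-back window
    future_steps=16,          # forecast horizon
    past_channels=3,          # numeric past variables (>0)
    future_channels=0,        # numeric future-known variables
    hidden_channels=64,       # encoder/decoder hidden size
    embs=[7, 24],             # cardinalities of the categorical variables
    d_model=128,              # attention embedding dimension
    max_voc_size=512,         # VQ codebook (token vocabulary) size
    num_layers=2,             # attention layers
    dropout_rate=0.1,
    commitment_cost=0.25,     # VQ-VAE commitment-loss weight
    decay=0.99,               # EMA decay for codebook updates
    n_heads=4,
    out_channels=1,           # variables to predict
    epoch_vqvae=20,           # epochs of the VQ-VAE stage
    persistence_weight=0.0,
    loss_type='l1',
    quantiles=[],             # e.g. [0.1, 0.5, 0.9] to switch to quantile loss
    optim='torch.optim.Adam', # None would be mapped to Adam anyway
    optim_config={'lr': 5e-4},
    scheduler_config={'step_size': 100, 'gamma': 0.1},  # StepLR settings
)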
- __init__(past_steps: int, future_steps: int, past_channels: int, future_channels: int, hidden_channels: int, embs: List[int], d_model: int, max_voc_size: int, num_layers: int, dropout_rate: float, commitment_cost: float, decay: float, n_heads: int, out_channels: int, epoch_vqvae: int, persistence_weight: float = 0.0, loss_type: str = 'l1', quantiles: List[int] = [], optim: str | None = None, optim_config: dict = None, scheduler_config: dict = None, **kwargs) → None
Same description and parameters as the class above.
Methods
__init__(past_steps, future_steps, ...[, ...]) – Custom encoder-decoder.
add_module(name, module) – Add a child module to the current module.
all_gather(data[, group, sync_grads]) – Gather tensors or collections of tensors from multiple processes.
apply(fn) – Apply fn recursively to every submodule (as returned by .children()) as well as self.
backward(loss, *args, **kwargs) – Called to perform backward on the loss returned in training_step().
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Return an iterator over module buffers.
children() – Return an iterator over immediate children modules.
clip_gradients(optimizer[, ...]) – Handles gradient clipping internally.
compile(*args, **kwargs) – Compile this Module's forward using torch.compile().
compute_loss(batch, y_hat) – Custom loss calculation.
configure_callbacks() – Configure model-specific callbacks.
configure_gradient_clipping(optimizer[, ...]) – Perform gradient clipping for the optimizer parameters.
configure_model() – Hook to create modules in a strategy and precision aware context.
configure_optimizers() – Each model has optim_config and scheduler_config.
configure_sharded_model() – Deprecated.
cpu() – See torch.nn.Module.cpu().
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – See torch.nn.Module.double().
eval() – Set the module in evaluation mode.
extra_repr() – Return the extra representation of the module.
float() – See torch.nn.Module.float().
forward(batch) – Forward method used during the training loop.
freeze() – Freeze all params for inference.
generate(idx, max_new_tokens[, temperature, ...]) – Take a conditioning sequence of indices idx (LongTensor of shape (b, t)) and complete the sequence max_new_tokens times, feeding the predictions back into the model each time (see the sketch after this list).
get_buffer(target) – Return the buffer given by target if it exists, otherwise throw an error.
get_extra_state() – Return any extra state to include in the module's state_dict.
get_parameter(target) – Return the parameter given by target if it exists, otherwise throw an error.
get_submodule(target) – Return the submodule given by target if it exists, otherwise throw an error.
gpt(tokens)
half() – See torch.nn.Module.half().
inference(batch) – Usually it is ok to return the output of the forward method but sometimes not (e.g. RNN).
ipu([device]) – Move all model parameters and buffers to the IPU.
load_from_checkpoint(checkpoint_path[, ...]) – Primary way of loading a model from a checkpoint.
load_state_dict(state_dict[, strict, assign]) – Copy parameters and buffers from state_dict into this module and its descendants.
log(name, value[, prog_bar, logger, ...]) – Log a key, value pair.
log_dict(dictionary[, prog_bar, logger, ...]) – Log a dictionary of values at once.
lr_scheduler_step(scheduler, metric) – Override this method to adjust the default way the Trainer calls each scheduler.
lr_schedulers() – Returns the learning rate scheduler(s) that are being used during training.
manual_backward(loss, *args, **kwargs) – Call this directly from your training_step() when doing optimizations manually.
modules() – Return an iterator over all modules in the network.
mtia([device]) – Move all model parameters and buffers to the MTIA.
named_buffers([prefix, recurse, ...]) – Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
on_after_backward() – Called after loss.backward() and before optimizers are stepped.
on_after_batch_transfer(batch, dataloader_idx) – Override to alter or apply batch augmentations to your batch after it is transferred to the device.
on_before_backward(loss) – Called before loss.backward().
on_before_batch_transfer(batch, dataloader_idx) – Override to alter or apply batch augmentations to your batch before it is transferred to the device.
on_before_optimizer_step(optimizer) – Called before optimizer.step().
on_before_zero_grad(optimizer) – Called after training_step() and before optimizer.zero_grad().
on_fit_end() – Called at the very end of fit.
on_fit_start() – Called at the very beginning of fit.
on_load_checkpoint(checkpoint) – Called by Lightning to restore your model.
on_predict_batch_end(outputs, batch, batch_idx) – Called in the predict loop after the batch.
on_predict_batch_start(batch, batch_idx[, ...]) – Called in the predict loop before anything happens for that batch.
on_predict_end() – Called at the end of predicting.
on_predict_epoch_end() – Called at the end of predicting.
on_predict_epoch_start() – Called at the beginning of predicting.
on_predict_model_eval() – Called when the predict loop starts.
on_predict_start() – Called at the beginning of predicting.
on_save_checkpoint(checkpoint) – Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.
on_test_batch_end(outputs, batch, batch_idx) – Called in the test loop after the batch.
on_test_batch_start(batch, batch_idx[, ...]) – Called in the test loop before anything happens for that batch.
on_test_end() – Called at the end of testing.
on_test_epoch_end() – Called in the test loop at the very end of the epoch.
on_test_epoch_start() – Called in the test loop at the very beginning of the epoch.
on_test_model_eval() – Called when the test loop starts.
on_test_model_train() – Called when the test loop ends.
on_test_start() – Called at the beginning of testing.
on_train_batch_end(outputs, batch, batch_idx) – Called in the training loop after the batch.
on_train_batch_start(batch, batch_idx) – Called in the training loop before anything happens for that batch.
on_train_end() – Called at the end of training before logger experiment is closed.
on_train_epoch_end() – PyTorch Lightning hook.
on_train_epoch_start() – Called in the training loop at the very beginning of the epoch.
on_train_start() – Called at the beginning of training after sanity check.
on_validation_batch_end(outputs, batch, ...) – Called in the validation loop after the batch.
on_validation_batch_start(batch, batch_idx) – Called in the validation loop before anything happens for that batch.
on_validation_end() – Called at the end of validation.
on_validation_epoch_end() – PyTorch Lightning hook.
on_validation_epoch_start() – Called in the validation loop at the very beginning of the epoch.
on_validation_model_eval() – Called when the validation loop starts.
on_validation_model_train() – Called when the validation loop ends.
on_validation_model_zero_grad() – Called by the training loop to release gradients before entering the validation loop.
on_validation_start() – Called at the beginning of validation.
optimizer_step(epoch, batch_idx, optimizer) – Override this method to adjust the default way the Trainer calls the optimizer.
optimizer_zero_grad(epoch, batch_idx, optimizer) – Override this method to change the default behaviour of optimizer.zero_grad().
optimizers([use_pl_optimizer]) – Returns the optimizer(s) that are being used during training.
parameters([recurse]) – Return an iterator over module parameters.
predict_dataloader() – An iterable or collection of iterables specifying prediction samples.
predict_step(*args, **kwargs) – Step function called during predict().
prepare_data() – Use this to download and prepare data.
print(*args, **kwargs) – Prints only from process 0.
register_backward_hook(hook) – Register a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Add a buffer to the module.
register_forward_hook(hook, *[, prepend, ...]) – Register a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Register a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Register a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Register a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Register a post-hook to be run after module's load_state_dict() is called.
register_load_state_dict_pre_hook(hook) – Register a pre-hook to be run before module's load_state_dict() is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Add a parameter to the module.
register_state_dict_post_hook(hook) – Register a post-hook for the state_dict() method.
register_state_dict_pre_hook(hook) – Register a pre-hook for the state_dict() method.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
save_hyperparameters(*args[, ignore, frame, ...]) – Save arguments to hparams attribute.
set_extra_state(state) – Set extra state contained in the loaded state_dict.
set_submodule(target, module[, strict]) – Set the submodule given by target if it exists, otherwise throw an error.
setup(stage) – Called at the beginning of fit (train + validate), validate, test, or predict.
share_memory() – See torch.Tensor.share_memory_().
state_dict(*args[, destination, prefix, ...]) – Return a dictionary containing references to the whole state of the module.
teardown(stage) – Called at the end of fit (train + validate), validate, test, or predict.
test_dataloader() – An iterable or collection of iterables specifying test samples.
test_step(*args, **kwargs) – Operates on a single batch of data from the test set.
to(*args, **kwargs) – See torch.nn.Module.to().
to_empty(*, device[, recurse]) – Move the parameters and buffers to the specified device without copying storage.
to_onnx([file_path, input_sample]) – Saves the model in ONNX format.
to_torchscript([file_path, method, ...]) – By default compiles the whole model to a ScriptModule.
toggle_optimizer(optimizer) – Makes sure only the gradients of the current optimizer's parameters are calculated in the training step to prevent dangling gradients in multiple-optimizer setup.
toggled_optimizer(optimizer) – Makes sure only the gradients of the current optimizer's parameters are calculated in the training step to prevent dangling gradients in multiple-optimizer setup.
train([mode]) – Set the module in training mode.
train_dataloader() – An iterable or collection of iterables specifying training samples.
training_step(batch, batch_idx) – PyTorch Lightning training step.
transfer_batch_to_device(batch, device, ...) – Override this hook if your DataLoader returns tensors wrapped in a custom data structure.
type(dst_type) – See torch.nn.Module.type().
unfreeze() – Unfreeze all parameters for training.
untoggle_optimizer(optimizer) – Resets the state of required gradients that were toggled with toggle_optimizer().
val_dataloader() – An iterable or collection of iterables specifying validation samples.
validation_step(batch, batch_idx) – PyTorch Lightning validation step.
xpu([device]) – Move all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Reset gradients of all model parameters.
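The generate() entry above describes the classic autoregressive sampling loop. Below is a hedged, minGPT-style sketch of that loop; gpt_logits stands in for the model's token predictor (something like the gpt() method) and block_size for its context length, both assumptions of this sketch rather than dsipts internals:

import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_sketch(gpt_logits, idx, max_new_tokens, temperature=1.0, block_size=128):
    """Complete a conditioning token sequence idx of shape (b, t)."""
    for _ in range(max_new_tokens):
        # crop the context if it grew past the attention block size
        idx_cond = idx if idx.size(1) <= block_size else idx[:, -block_size:]
        logits = gpt_logits(idx_cond)             # (b, t, vocab)
        logits = logits[:, -1, :] / temperature   # keep only the last step
        probs = F.softmax(logits, dim=-1)
        idx_next = torch.multinomial(probs, num_samples=1)  # sample one token
        idx = torch.cat((idx, idx_next), dim=1)   # feed the prediction back
    return idx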
Attributes
CHECKPOINT_HYPER_PARAMS_KEY
CHECKPOINT_HYPER_PARAMS_NAME
CHECKPOINT_HYPER_PARAMS_TYPE
T_destination
automatic_optimization – If set to False you are responsible for calling .backward(), .step(), .zero_grad().
call_super_init
current_epoch – The current epoch in the Trainer, or 0 if not attached.
device
device_mesh – Strategies like ModelParallelStrategy will create a device mesh that can be accessed in the configure_model() hook to parallelize the LightningModule.
dtype
dump_patches
example_input_array – The example input array is a specification of what the module can consume in the forward() method.
fabric
global_rank – The index of the current process across all nodes and devices.
global_step – Total training batches seen across all epochs.
hparams – The collection of hyperparameters saved with save_hyperparameters().
hparams_initial – The collection of hyperparameters saved with save_hyperparameters().
local_rank – The index of the current process within a single node.
logger – Reference to the logger object in the Trainer.
loggers – Reference to the list of loggers in the Trainer.
on_gpu – Returns True if this model is currently located on a GPU.
strict_loading – Determines how Lightning loads this model using .load_state_dict(..., strict=model.strict_loading).
trainer
training
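Most of the methods and attributes above are inherited from torch.nn.Module and LightningModule. A short hedged illustration of the checkpoint round-trip they provide (the checkpoint path is hypothetical):

from dsipts import VQVAEA

model = VQVAEA.load_from_checkpoint("checkpoints/vqvaea.ckpt")  # hypothetical path
print(model.hparams)  # hyperparameters stored via save_hyperparameters()
model.freeze()        # freeze all params for inference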