dsipts.models.autoformer.layers module

class dsipts.models.autoformer.layers.AutoCorrelation(mask_flag=True, factor=1, scale=None, attention_dropout=0.1, output_attention=False)[source]

Bases: Module

AutoCorrelation mechanism with two phases: (1) period-based dependency discovery and (2) time-delay aggregation. This block can be used as a seamless drop-in replacement for the self-attention family of mechanisms.

__init__(mask_flag=True, factor=1, scale=None, attention_dropout=0.1, output_attention=False)[source]

time_delay_agg_training(values, corr)[source]

Sped-up version of the time-delay aggregation (a batch-normalization-style design), used during the training phase.

time_delay_agg_inference(values, corr)[source]

Sped-up version of the time-delay aggregation (a batch-normalization-style design), used during the inference phase.

time_delay_agg_full(values, corr)[source]

Standard version of the time-delay aggregation.

forward(queries, keys, values, attn_mask)[source]

Apply the autocorrelation mechanism to queries, keys, and values and return the aggregated output together with the correlation scores (None when output_attention=False).

Note

As with any nn.Module, call the module instance itself rather than forward() directly; the instance call runs the registered hooks, while a direct forward() call silently skips them.
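
A minimal usage sketch, assuming the conventions of the reference Autoformer implementation: inputs of shape (batch, length, heads, per-head channels) and an (output, correlation) return pair. All shapes and hyperparameters below are illustrative.

    import torch
    from dsipts.models.autoformer.layers import AutoCorrelation

    B, L, H, E = 2, 96, 8, 64   # illustrative batch, length, heads, per-head dim
    corr = AutoCorrelation(mask_flag=False, factor=1, output_attention=True)
    q, k, v = (torch.randn(B, L, H, E) for _ in range(3))
    out, scores = corr(q, k, v, attn_mask=None)
    print(out.shape)            # torch.Size([2, 96, 8, 64])

Per the Autoformer paper, the number of delays kept during aggregation grows as factor * log(length), so factor trades fidelity against speed.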

class dsipts.models.autoformer.layers.AutoCorrelationLayer(correlation, d_model, n_heads, d_keys=None, d_values=None)[source]

Bases: Module

Wraps a correlation mechanism (such as AutoCorrelation) with learned query/key/value projections split across n_heads, plus an output projection back to d_model. When d_keys or d_values are omitted, they default to d_model // n_heads.

__init__(correlation, d_model, n_heads, d_keys=None, d_values=None)[source]

forward(queries, keys, values, attn_mask)[source]

Project queries, keys, and values, run the wrapped correlation mechanism across the heads, and project the result back to d_model; returns the output and the correlation scores.
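
A sketch of self-correlation over a (batch, length, d_model) sequence, assuming the layer returns an (output, correlation) pair like the wrapped mechanism:

    import torch
    from dsipts.models.autoformer.layers import AutoCorrelation, AutoCorrelationLayer

    d_model, n_heads = 64, 4
    layer = AutoCorrelationLayer(AutoCorrelation(mask_flag=False, factor=1),
                                 d_model, n_heads)
    x = torch.randn(2, 96, d_model)
    out, scores = layer(x, x, x, attn_mask=None)   # self-correlation
    print(out.shape)                               # torch.Size([2, 96, 64])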

class dsipts.models.autoformer.layers.my_Layernorm(channels)[source]

Bases: Module

Specially designed layer normalization for the seasonal part.

__init__(channels)[source]

forward(x)[source]

Apply layer normalization and then subtract the per-channel mean over the time dimension, keeping the seasonal component centred around zero.
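
A short sketch of the intended effect, assuming (as in the reference implementation) that the block subtracts the per-channel mean over the time dimension after a standard LayerNorm:

    import torch
    from dsipts.models.autoformer.layers import my_Layernorm

    norm = my_Layernorm(channels=64)
    x = torch.randn(2, 96, 64)
    y = norm(x)
    print(y.shape)                    # torch.Size([2, 96, 64])
    print(y.mean(dim=1).abs().max())  # ~0: time-wise mean removed per channel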

class dsipts.models.autoformer.layers.moving_avg(kernel_size, stride)[source]

Bases: Module

Moving-average block to highlight the trend of a time series.

__init__(kernel_size, stride)[source]

forward(x)[source]

Average-pool along the time dimension, padding both ends by repeating the boundary values so that (with stride=1 and an odd kernel) the output length matches the input.
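
Sketch, assuming the reference behaviour of padding both ends of the sequence with the boundary values before pooling, so that with stride=1 and an odd kernel the output length matches the input:

    import torch
    from dsipts.models.autoformer.layers import moving_avg

    ma = moving_avg(kernel_size=25, stride=1)
    x = torch.randn(2, 96, 7)   # (batch, length, channels)
    trend = ma(x)
    print(trend.shape)          # torch.Size([2, 96, 7])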

class dsipts.models.autoformer.layers.series_decomp(kernel_size)[source]

Bases: Module

Series decomposition block: splits a series into a seasonal part and a trend part using a moving average.

__init__(kernel_size)[source]

forward(x)[source]

Decompose the input into a seasonal (residual) component and a trend (moving-average) component and return both.
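
The decomposition is additive: trend = moving_avg(x) and seasonal = x - trend, so the two parts reconstruct the input. A sketch assuming the block returns (seasonal, trend) in that order, as in the reference implementation:

    import torch
    from dsipts.models.autoformer.layers import series_decomp

    decomp = series_decomp(kernel_size=25)
    x = torch.randn(2, 96, 7)
    seasonal, trend = decomp(x)
    assert torch.allclose(seasonal + trend, x, atol=1e-5)  # additive by construction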

class dsipts.models.autoformer.layers.EncoderLayer(attention, d_model, d_ff=None, moving_avg=25, dropout=0.1, activation='relu')[source]

Bases: Module

Autoformer encoder layer with the progressive decomposition architecture

__init__(attention, d_model, d_ff=None, moving_avg=25, dropout=0.1, activation='relu')[source]

forward(x, attn_mask=None)[source]

Run autocorrelation attention with a residual connection, decompose the result, apply the convolutional feed-forward block, and decompose again; returns the seasonal output and the attention scores.
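
Wiring one encoder layer end to end; hyperparameters are illustrative and the (output, attention) return pair is assumed from the reference implementation:

    import torch
    from dsipts.models.autoformer.layers import (AutoCorrelation,
                                                 AutoCorrelationLayer, EncoderLayer)

    d_model, n_heads = 64, 4
    attn = AutoCorrelationLayer(AutoCorrelation(mask_flag=False, factor=1),
                                d_model, n_heads)
    enc_layer = EncoderLayer(attn, d_model, d_ff=256, moving_avg=25,
                             dropout=0.1, activation='relu')
    x = torch.randn(2, 96, d_model)
    y, scores = enc_layer(x)    # y: (2, 96, 64)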

class dsipts.models.autoformer.layers.Encoder(attn_layers, conv_layers=None, norm_layer=None)[source]

Bases: Module

Autoformer encoder

__init__(attn_layers, conv_layers=None, norm_layer=None)[source]

forward(x, attn_mask=None)[source]

Pass the input through the stacked attention layers (interleaved with the optional convolution layers), apply the final normalization layer if one was given, and return the output together with the per-layer attention scores.
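
Stacking two such layers into a full encoder with the seasonal layer norm; again a sketch, assuming forward returns the encoded sequence plus the per-layer attentions:

    import torch
    from dsipts.models.autoformer.layers import (AutoCorrelation, AutoCorrelationLayer,
                                                 EncoderLayer, Encoder, my_Layernorm)

    d_model, n_heads = 64, 4
    encoder = Encoder(
        [EncoderLayer(
             AutoCorrelationLayer(AutoCorrelation(mask_flag=False, factor=1),
                                  d_model, n_heads),
             d_model, d_ff=256, moving_avg=25, dropout=0.1, activation='relu')
         for _ in range(2)],
        norm_layer=my_Layernorm(d_model))
    x = torch.randn(2, 96, d_model)
    enc_out, attns = encoder(x)   # enc_out: (2, 96, 64)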

class dsipts.models.autoformer.layers.DecoderLayer(self_attention, cross_attention, d_model, c_out, d_ff=None, moving_avg=25, dropout=0.1, activation='relu')[source]

Bases: Module

Autoformer decoder layer with the progressive decomposition architecture

__init__(self_attention, cross_attention, d_model, c_out, d_ff=None, moving_avg=25, dropout=0.1, activation='relu')[source]

forward(x, cross, x_mask=None, cross_mask=None)[source]

Apply self-correlation to x and cross-correlation against the encoder output cross, decomposing after each step; returns the seasonal part and the accumulated residual trend, projected to c_out channels.
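
One decoder layer combines self-correlation on the decoder sequence with cross-correlation against the encoder output. The sketch assumes the (seasonal, residual trend) return of the reference implementation, with the trend projected to c_out channels:

    import torch
    from dsipts.models.autoformer.layers import (AutoCorrelation,
                                                 AutoCorrelationLayer, DecoderLayer)

    d_model, n_heads, c_out = 64, 4, 7
    dec_layer = DecoderLayer(
        AutoCorrelationLayer(AutoCorrelation(mask_flag=True, factor=1),
                             d_model, n_heads),
        AutoCorrelationLayer(AutoCorrelation(mask_flag=False, factor=1),
                             d_model, n_heads),
        d_model, c_out, d_ff=256, moving_avg=25, dropout=0.1, activation='relu')
    x = torch.randn(2, 72, d_model)      # decoder input window
    cross = torch.randn(2, 96, d_model)  # encoder output
    seasonal, residual_trend = dec_layer(x, cross)  # residual_trend: (2, 72, 7)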

class dsipts.models.autoformer.layers.Decoder(layers, norm_layer=None, projection=None)[source]

Bases: Module

Autoformer decoder

__init__(layers, norm_layer=None, projection=None)[source]

forward(x, cross, x_mask=None, cross_mask=None, trend=None)[source]

Run the stacked decoder layers, adding each layer's residual trend onto trend, then normalize and project the seasonal stream; returns the seasonal part and the accumulated trend.
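
Putting the decoder together: each layer's residual trend is added onto a running trend, while the seasonal stream is normalized and projected. The zero trend initialisation, the nn.Linear projection, and the (seasonal, trend) return are assumptions based on the reference Autoformer model:

    import torch
    from torch import nn
    from dsipts.models.autoformer.layers import (AutoCorrelation, AutoCorrelationLayer,
                                                 DecoderLayer, Decoder, my_Layernorm)

    d_model, n_heads, c_out = 64, 4, 7
    decoder = Decoder(
        [DecoderLayer(
             AutoCorrelationLayer(AutoCorrelation(mask_flag=True, factor=1),
                                  d_model, n_heads),
             AutoCorrelationLayer(AutoCorrelation(mask_flag=False, factor=1),
                                  d_model, n_heads),
             d_model, c_out, d_ff=256, moving_avg=25, dropout=0.1,
             activation='relu')],
        norm_layer=my_Layernorm(d_model),
        projection=nn.Linear(d_model, c_out))
    x = torch.randn(2, 72, d_model)
    cross = torch.randn(2, 96, d_model)
    trend_init = torch.zeros(2, 72, c_out)  # assumed zero trend start
    seasonal, trend = decoder(x, cross, trend=trend_init)
    forecast = seasonal + trend             # final prediction in the reference model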

class dsipts.models.autoformer.layers.PositionalEmbedding(d_model, max_len=5000)[source]

Bases: Module

Standard sinusoidal positional embedding, precomputed for up to max_len positions.

__init__(d_model, max_len=5000)[source]

forward(x)[source]

Return the sinusoidal positional encodings for the first x.size(1) positions; the result broadcasts over the batch dimension.
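
A sketch assuming the standard sinusoidal table: encodings are precomputed for up to max_len positions and sliced to the input length, giving a (1, length, d_model) tensor that broadcasts over the batch:

    import torch
    from dsipts.models.autoformer.layers import PositionalEmbedding

    pe = PositionalEmbedding(d_model=64, max_len=5000)
    x = torch.randn(2, 96, 64)
    pos = pe(x)
    print(pos.shape)   # torch.Size([1, 96, 64]); add to x via broadcasting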