dsipts.models.vva.minigpt module¶
- class dsipts.models.vva.minigpt.NewGELU(*args, **kwargs)[source]¶
Bases: Module
Implementation of the GELU activation function currently in the Google BERT repo (identical to OpenAI GPT). Reference: Gaussian Error Linear Units (GELU) paper: https://arxiv.org/abs/1606.08415
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
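For reference, the tanh approximation from the GELU paper can be written in a few lines of PyTorch. This is a minimal sketch assuming NewGELU applies the standard BERT/GPT formula; the free function new_gelu is illustrative and not part of the documented API:

    import math
    import torch

    def new_gelu(x: torch.Tensor) -> torch.Tensor:
        # tanh approximation of the Gaussian Error Linear Unit:
        # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3)))
        return 0.5 * x * (1.0 + torch.tanh(
            math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

In recent PyTorch versions the same approximation is also available as torch.nn.functional.gelu(x, approximate='tanh').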
- class dsipts.models.vva.minigpt.CausalSelfAttention(n_embd, n_head, attn_pdrop, resid_pdrop, block_size)[source]¶
Bases: Module
A vanilla multi-head masked self-attention layer with a projection at the end. It is possible to use torch.nn.MultiheadAttention here, but I am including an explicit implementation to show that there is nothing too scary here.
- __init__(n_embd, n_head, attn_pdrop, resid_pdrop, block_size)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
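To make the constructor arguments concrete, here is an illustrative sketch of a masked multi-head self-attention layer in the style the docstring describes. The class name CausalSelfAttentionSketch and internals such as the fused c_attn projection are assumptions for exposition, not the documented implementation:

    import math
    import torch
    import torch.nn as nn
    from torch.nn import functional as F

    class CausalSelfAttentionSketch(nn.Module):
        def __init__(self, n_embd, n_head, attn_pdrop, resid_pdrop, block_size):
            super().__init__()
            assert n_embd % n_head == 0
            # query, key, value projections for all heads in one linear layer
            self.c_attn = nn.Linear(n_embd, 3 * n_embd)
            # output projection applied after the heads are re-assembled
            self.c_proj = nn.Linear(n_embd, n_embd)
            self.attn_dropout = nn.Dropout(attn_pdrop)
            self.resid_dropout = nn.Dropout(resid_pdrop)
            # lower-triangular mask so each position attends only to the past
            mask = torch.tril(torch.ones(block_size, block_size))
            self.register_buffer("mask", mask.view(1, 1, block_size, block_size))
            self.n_head = n_head
            self.n_embd = n_embd

        def forward(self, x):
            B, T, C = x.size()  # batch size, sequence length, embedding dim
            q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
            # reshape each to (B, n_head, T, head_size)
            q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
            k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
            v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
            # scaled dot-product attention with the causal mask applied
            att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))
            att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))
            att = self.attn_dropout(F.softmax(att, dim=-1))
            y = att @ v                                       # (B, n_head, T, head_size)
            y = y.transpose(1, 2).contiguous().view(B, T, C)  # merge heads back
            return self.resid_dropout(self.c_proj(y))

Given x of shape (batch, T, n_embd) with T <= block_size, the output has the same shape, so the layer can be dropped into a residual stream.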
- class dsipts.models.vva.minigpt.Block(n_embd, resid_pdrop, n_head, attn_pdrop, block_size)[source]¶
Bases: Module
An unassuming Transformer block.
- __init__(n_embd, resid_pdrop, n_head, attn_pdrop, block_size)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
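Putting the pieces together, a pre-LayerNorm Transformer block with these constructor arguments might look as follows. This sketch reuses CausalSelfAttentionSketch from the previous sketch and PyTorch's built-in tanh-approximate GELU as a stand-in for NewGELU; the attribute names and the conventional 4x MLP widening are assumptions, not the documented internals:

    import torch.nn as nn

    class BlockSketch(nn.Module):
        # masked self-attention followed by an MLP, each wrapped in a
        # residual connection with LayerNorm applied before the sublayer
        def __init__(self, n_embd, resid_pdrop, n_head, attn_pdrop, block_size):
            super().__init__()
            self.ln_1 = nn.LayerNorm(n_embd)
            self.attn = CausalSelfAttentionSketch(
                n_embd, n_head, attn_pdrop, resid_pdrop, block_size)
            self.ln_2 = nn.LayerNorm(n_embd)
            self.mlp = nn.Sequential(
                nn.Linear(n_embd, 4 * n_embd),    # conventional 4x widening
                nn.GELU(approximate="tanh"),      # stands in for NewGELU
                nn.Linear(4 * n_embd, n_embd),
                nn.Dropout(resid_pdrop),
            )

        def forward(self, x):
            x = x + self.attn(self.ln_1(x))  # residual around attention
            x = x + self.mlp(self.ln_2(x))   # residual around the MLP
            return x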