NN Modules
1. MLP
Multi-Layer Perceptron
MLP
Multi-Layer Perceptron class.
Parameters:
in_features: int, dimension of input.
out_features: int, dimension of output.
activation: str, activation function to use.
hidden_size: int, dimension of hidden layers.
num_layers: int, number of hidden layers.
dropout: float, dropout rate.
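The documented parameters can be sketched as a minimal module, assuming the layout is `num_layers` hidden layers of width `hidden_size`, each followed by the activation and dropout; the actual implementation may differ:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Minimal MLP sketch: num_layers hidden layers of width hidden_size,
    each followed by the chosen activation and dropout."""
    def __init__(self, in_features, out_features, activation="ReLU",
                 hidden_size=64, num_layers=2, dropout=0.0):
        super().__init__()
        act = getattr(nn, activation)  # e.g. nn.ReLU, nn.Tanh
        layers, width = [], in_features
        for _ in range(num_layers):
            layers += [nn.Linear(width, hidden_size), act(), nn.Dropout(dropout)]
            width = hidden_size
        layers.append(nn.Linear(width, out_features))  # output projection
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

mlp = MLP(in_features=8, out_features=1, hidden_size=16, num_layers=2, dropout=0.1)
y = mlp(torch.randn(4, 8))  # batch of 4 inputs -> shape [4, 1]
```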
2. Temporal Convolutions
For a long time in deep learning, sequence modelling was synonymous with recurrent networks, yet several papers have shown that simple convolutional architectures can outperform canonical recurrent networks such as LSTMs by demonstrating longer effective memory.
References
- van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. Computing Research Repository, abs/1609.03499. URL: http://arxiv.org/abs/1609.03499.
- Shaojie Bai, Zico Kolter, Vladlen Koltun. (2018). An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. Computing Research Repository, abs/1803.01271. URL: https://arxiv.org/abs/1803.01271.
Chomp1d
Receives an input x of dim [N,C,T] and trims it so that only 'time available' information is used. Used by the one-dimensional causal convolution CausalConv1d.
Parameters:
horizon: int, length of outsample values to skip.
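A minimal sketch of the trimming operation, assuming it simply drops the trailing `horizon` steps of the time dimension:

```python
import torch
import torch.nn as nn

class Chomp1d(nn.Module):
    """Trims the last `horizon` time steps of a [N, C, T] tensor so that a
    left-padded convolution only uses 'time available' information."""
    def __init__(self, horizon):
        super().__init__()
        self.horizon = horizon

    def forward(self, x):
        # drop the trailing `horizon` steps of the time axis
        return x[:, :, :-self.horizon].contiguous()

chomp = Chomp1d(horizon=2)
x = torch.randn(1, 3, 10)  # [N, C, T]
out = chomp(x)             # shape [1, 3, 8]
```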
CausalConv1d
Causal Convolution 1d
Receives an input x of dim [N,C_in,T] and computes a causal convolution in the time dimension, skipping the H steps of the forecast horizon through its dilation. Consider a batch of one element; the dilated convolution operation on time step t is defined as:

.. math:: y_t = \sum_{k=0}^{K-1} w_k \, x_{t - d \cdot k}

where d is the dilation factor, K is the kernel size, and k is the index of the considered past observation. The dilation effectively applies a filter with skip connections. If d = 1, one recovers a normal convolution.
Parameters:
in_channels: int, dimension of x input's initial channels.
out_channels: int, dimension of x output's channels.
activation: str, identifying activations from PyTorch activations. Select from 'ReLU', 'Softplus', 'Tanh', 'SELU', 'LeakyReLU', 'PReLU', 'Sigmoid'.
padding: int, number of zeros used for left padding.
kernel_size: int, convolution's kernel size.
dilation: int, dilation skip connections.
Returns:
x: tensor, torch tensor of dim [N,C_out,T], activation(conv1d(inputs, kernel) + bias).
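A common way to realize this causality with a standard `nn.Conv1d` is the TCN trick: pad `(K-1)*d` on both sides and trim the trailing pad, so that y_t depends only on x up to time t. A sketch under that assumption (the library's exact padding scheme may differ):

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Sketch of a causal 1d convolution via padding + right-trim (chomp)."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 dilation=1, activation="ReLU"):
        super().__init__()
        # pad (K-1)*d on both sides; trimming the trailing pad afterwards
        # leaves y_t = sum_k w_k x_{t - d*k}, i.e. a causal filter
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=self.pad, dilation=dilation)
        self.activation = getattr(nn, activation)()

    def forward(self, x):
        out = self.conv(x)
        if self.pad > 0:
            out = out[:, :, :-self.pad]  # chomp: keep only 'time available' info
        return self.activation(out)

conv = CausalConv1d(in_channels=3, out_channels=5, kernel_size=2, dilation=2)
x = torch.randn(1, 3, 12)
y = conv(x)  # [1, 5, 12], same length as the input
```

Causality can be checked directly: perturbing the last time step of x leaves all earlier outputs unchanged.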
TemporalConvolutionEncoder
Temporal Convolution Encoder
Receives an input x of dim [N,T,C_in], permutes it to [N,C_in,T], and applies a deep stack of exponentially dilated causal convolutions. The exponentially increasing dilations allow the convolutions to form weighted averages over an exponentially large receptive field, i.e. long-term memory.
Parameters:
in_channels: int, dimension of x input's initial channels.
out_channels: int, dimension of x output's channels.
kernel_size: int, size of the convolving kernel.
dilations: int list, controls the temporal spacing between the kernel points.
activation: str, identifying activations from PyTorch activations. Select from 'ReLU', 'Softplus', 'Tanh', 'SELU', 'LeakyReLU', 'PReLU', 'Sigmoid'.
Returns:
x: tensor, torch tensor of dim [N,T,C_out].
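The permute-then-stack behavior described above can be sketched as follows, assuming each layer is a causal convolution realized with symmetric padding plus a trailing trim (the actual layer composition may differ):

```python
import torch
import torch.nn as nn

class TemporalConvolutionEncoder(nn.Module):
    """Sketch: stack of exponentially dilated causal convolutions.
    Input [N, T, C_in] -> output [N, T, C_out]."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 dilations, activation="ReLU"):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_channels
        for d in dilations:
            pad = (kernel_size - 1) * d  # causal padding, trimmed in forward
            self.convs.append(nn.Conv1d(ch, out_channels, kernel_size,
                                        padding=pad, dilation=d))
            ch = out_channels
        self.act = getattr(nn, activation)()

    def forward(self, x):
        x = x.permute(0, 2, 1)                  # [N, T, C] -> [N, C, T]
        for conv in self.convs:
            pad = conv.padding[0]
            x = self.act(conv(x)[:, :, :-pad])  # chomp trailing pad -> causal
        return x.permute(0, 2, 1)               # back to [N, T, C_out]

enc = TemporalConvolutionEncoder(in_channels=3, out_channels=8,
                                 kernel_size=2, dilations=[1, 2, 4, 8])
y = enc(torch.randn(4, 32, 3))  # [4, 32, 8]
```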
3. Transformers
References
- Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang. "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting"
- Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting"
TransEncoder
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes. Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.
.. note:: An ``__init__()`` call to the parent class must be made before assignment on the child.
:ivar training: Boolean representing whether this module is in training or evaluation mode.
:vartype training: bool
TransEncoderLayer
TransDecoder
TransDecoderLayer
AttentionLayer
DataEmbedding
TemporalEmbedding
FixedEmbedding
TimeFeatureEmbedding
TokenEmbedding
PositionalEmbedding
SeriesDecomp
Series decomposition block
MovingAvg
Moving average block to highlight the trend of time series
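A minimal sketch of these two blocks together, assuming the common pattern of an edge-padded moving average for the trend and the residual as the seasonal part (the exact padding scheme is an assumption):

```python
import torch
import torch.nn as nn

class MovingAvg(nn.Module):
    """Moving average over the time dimension of a [N, T, C] series,
    with edge-value padding so the output keeps length T."""
    def __init__(self, kernel_size):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size, stride=1)

    def forward(self, x):
        # pad front/back by repeating the edge values, total pad = K - 1
        front = x[:, :1, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size // 2, 1)
        x = torch.cat([front, x, back], dim=1)
        # AvgPool1d works on [N, C, T], so permute around it
        return self.avg(x.permute(0, 2, 1)).permute(0, 2, 1)

class SeriesDecomp(nn.Module):
    """Splits a series into (seasonal, trend): trend = moving average,
    seasonal = residual, so seasonal + trend reconstructs the input."""
    def __init__(self, kernel_size):
        super().__init__()
        self.moving_avg = MovingAvg(kernel_size)

    def forward(self, x):
        trend = self.moving_avg(x)
        return x - trend, trend

decomp = SeriesDecomp(kernel_size=5)
x = torch.randn(2, 20, 3)
seasonal, trend = decomp(x)  # each [2, 20, 3]
```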
RevIN
RevIN (Reversible Instance Normalization)
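The idea can be sketched as follows: normalize each instance by its own per-series mean and standard deviation before the model, then invert the transform on the model output; the `mode` flag and affine parameters here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RevIN(nn.Module):
    """Sketch of Reversible Instance Normalization: standardize each [N, T, C]
    instance by its own statistics, and undo the transform on the output."""
    def __init__(self, num_features, eps=1e-5, affine=True):
        super().__init__()
        self.eps = eps
        self.affine = affine
        if affine:
            self.weight = nn.Parameter(torch.ones(num_features))
            self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x, mode):
        if mode == "norm":
            # statistics per instance and channel, over the time dimension
            self.mean = x.mean(dim=1, keepdim=True).detach()
            self.std = torch.sqrt(
                x.var(dim=1, keepdim=True, unbiased=False) + self.eps).detach()
            x = (x - self.mean) / self.std
            if self.affine:
                x = x * self.weight + self.bias
            return x
        elif mode == "denorm":
            if self.affine:
                x = (x - self.bias) / (self.weight + self.eps)
            return x * self.std + self.mean

revin = RevIN(num_features=3)
x = torch.randn(2, 24, 3)
x_norm = revin(x, "norm")         # per-instance standardized input
x_rec = revin(x_norm, "denorm")   # mapped back to the original scale
```

Normalizing and immediately denormalizing recovers the input up to the eps terms, which is the "reversible" property the name refers to.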