PyTorch Dataset/Loader
Torch Dataset for Time Series
source
TimeSeriesLoader
TimeSeriesLoader DataLoader. Source code.
A small modification of PyTorch's DataLoader: it combines a dataset and a sampler, and provides an iterable over the given dataset. The underlying `torch.utils.data.DataLoader` class supports both map-style and iterable-style datasets with single- or multi-process loading, customizable loading order, and optional automatic batching (collation) and memory pinning.
Parameters:

- `batch_size` (int, optional): how many samples per batch to load (default: 1).
- `shuffle` (bool, optional): set to `True` to have the data reshuffled at every epoch (default: `False`).
- `sampler` (Sampler or Iterable, optional): defines the strategy to draw samples from the dataset. Can be any `Iterable` with `__len__` implemented. If specified, `shuffle` must not be specified.
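A minimal sketch of how these parameters compose on a plain `torch.utils.data.DataLoader`; the toy tensors are illustrative, and the docstring above suggests `TimeSeriesLoader` accepts the same keyword arguments:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 8 univariate windows, each of length 24.
windows = torch.randn(8, 24)
dataset = TensorDataset(windows)

# Batched, shuffled iteration; shuffle=True reshuffles every epoch.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for (batch,) in loader:
    print(batch.shape)  # torch.Size([4, 24])
```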
source
BaseTimeSeriesDataset
An abstract class representing a `Dataset`.
All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite `__getitem__`, supporting fetching a data sample for a given key. Subclasses could also optionally overwrite `__len__`, which is expected to return the size of the dataset by many `Sampler` implementations and the default options of `DataLoader`. Subclasses could also optionally implement `__getitems__`, to speed up batched sample loading. This method accepts a list of indices of the samples in a batch and returns the list of samples.

Note: `DataLoader` by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
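A minimal sketch of the map-style contract described above (the class and field names are illustrative, not part of the library):

```python
import torch
from torch.utils.data import Dataset

class WindowDataset(Dataset):
    """Map-style dataset: integral index -> fixed-length window."""

    def __init__(self, series: torch.Tensor, window: int):
        self.series = series
        self.window = window

    def __len__(self) -> int:
        # Consulted by samplers and by DataLoader's default options.
        return len(self.series) - self.window + 1

    def __getitem__(self, idx: int) -> torch.Tensor:
        # Fetch one sample for a given key.
        return self.series[idx : idx + self.window]

    def __getitems__(self, indices):
        # Optional fast path: fetch a whole batch of samples at once.
        return [self.series[i : i + self.window] for i in indices]
```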
source
LocalFilesTimeSeriesDataset
Same map-style `Dataset` contract as `BaseTimeSeriesDataset` above: subclasses must implement `__getitem__`, may optionally implement `__len__` and `__getitems__`, and non-integral keys require a custom sampler.
source
TimeSeriesDataset
Same map-style `Dataset` contract as `BaseTimeSeriesDataset` above: subclasses must implement `__getitem__`, may optionally implement `__len__` and `__getitems__`, and non-integral keys require a custom sampler.
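A hedged sketch of constructing the dataset from a long-format dataframe. The `from_df` constructor, the `unique_id`/`ds`/`y` column convention, and the return values are assumptions drawn from common NeuralForecast usage; verify against the source link above:

```python
import numpy as np
import pandas as pd
from neuralforecast.tsdataset import TimeSeriesDataset

# Long-format frame: one row per (series, timestamp) observation.
df = pd.DataFrame({
    "unique_id": ["series_1"] * 48,
    "ds": pd.date_range("2024-01-01", periods=48, freq="D"),
    "y": np.random.randn(48),
})

# Assumed constructor; returns the dataset plus per-series bookkeeping.
dataset, indices, dates, ds = TimeSeriesDataset.from_df(df)
print(indices)
```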
source
TimeSeriesDataModule
A DataModule standardizes the training, validation, and test splits, data preparation, and transforms. The main advantage is consistent data splits, data preparation, and transforms across models.
Example:
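A minimal sketch of the DataModule pattern (assuming PyTorch Lightning's `LightningDataModule`; the toy dataset and split sizes are placeholders, not the library's defaults):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class ToyTimeSeriesDataModule(pl.LightningDataModule):
    def __init__(self, batch_size: int = 32):
        super().__init__()
        self.batch_size = batch_size

    def setup(self, stage=None):
        # Deterministic split so every model sees the same data.
        windows = torch.arange(100 * 24, dtype=torch.float32).reshape(100, 24)
        self.train_set = TensorDataset(windows[:80])
        self.val_set = TensorDataset(windows[80:])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)
```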