MLPNet

class MLPNet(input_size: int, hidden_size: List[int], lr: float, loss: torch.nn.modules.module.Module, optimizer_params: Optional[dict])[source]

Bases: etna.models.base.DeepBaseNet

MLP model.

Init MLP model.

Parameters
  • input_size (int) – size of the input feature space: target plus extra features

  • hidden_size (List[int]) – list of sizes of the hidden states

  • lr (float) – learning rate

  • loss (torch.nn.Module) – loss function

  • optimizer_params (Optional[dict]) – parameters for the Adam optimizer (api reference torch.optim.Adam)

Return type

None
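
A minimal construction sketch using the documented arguments; the sizes and hyperparameters below are illustrative:

    import torch
    from etna.models.nn.mlp import MLPNet

    # Illustrative sizes: the target plus two extra features gives input_size=3.
    net = MLPNet(
        input_size=3,
        hidden_size=[64, 32],      # two hidden layers
        lr=1e-3,
        loss=torch.nn.MSELoss(),
        optimizer_params=None,     # no extra arguments for torch.optim.Adam
    )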

Methods
  • configure_optimizers() – Optimizer configuration.

  • forward(batch) – Forward pass.

  • make_samples(df, encoder_length, decoder_length) – Make samples from segment DataFrame.

  • step(batch, *args, **kwargs) – Step for loss computation for training or validation.

configure_optimizers()[source]

Optimizer configuration.
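
A minimal sketch of what such a configuration typically produces, assuming the module keeps the documented lr and optimizer_params; the stand-in class below is illustrative, not the library's code:

    import torch
    from typing import Optional

    class OptimizerSketch(torch.nn.Module):
        """Illustrative stand-in mirroring the documented constructor arguments."""

        def __init__(self, lr: float, optimizer_params: Optional[dict]):
            super().__init__()
            self.layer = torch.nn.Linear(3, 1)
            self.lr = lr
            self.optimizer_params = optimizer_params if optimizer_params is not None else {}

        def configure_optimizers(self):
            # Adam over the module parameters, extended with optimizer_params
            # (see torch.optim.Adam for the accepted keyword arguments).
            return torch.optim.Adam(self.parameters(), lr=self.lr, **self.optimizer_params)

    optimizer = OptimizerSketch(lr=1e-3, optimizer_params={"weight_decay": 1e-5}).configure_optimizers()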

forward(batch: etna.models.nn.mlp.MLPBatch)[source]

Forward pass.

Parameters

batch (etna.models.nn.mlp.MLPBatch) – batch of data

Returns

forecast
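
A sketch of a forward call; the batch keys below ("decoder_real", "decoder_target", "segment") are assumptions about the MLPBatch layout made for illustration, with shapes following the documented input_size:

    import torch
    from etna.models.nn.mlp import MLPNet

    net = MLPNet(input_size=3, hidden_size=[64, 32], lr=1e-3,
                 loss=torch.nn.MSELoss(), optimizer_params=None)

    # Assumed batch layout: features of shape (batch, decoder_length, input_size)
    # plus the true target and a segment label.
    batch = {
        "decoder_real": torch.rand(16, 7, 3),
        "decoder_target": torch.rand(16, 7, 1),
        "segment": ["segment_0"] * 16,
    }
    forecast = net.forward(batch)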

make_samples(df: pandas.core.frame.DataFrame, encoder_length: int, decoder_length: int) → Iterable[dict][source]

Make samples from segment DataFrame.

Parameters
  • df (pandas.core.frame.DataFrame) – segment DataFrame to build samples from

  • encoder_length (int) – length of the encoder part of each sample

  • decoder_length (int) – length of the decoder part of each sample

Return type

Iterable[dict]
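
A usage sketch; the single-segment column layout ("segment", "target" plus one numeric feature) and the window lengths are assumptions made for illustration:

    import pandas as pd
    import torch
    from etna.models.nn.mlp import MLPNet

    net = MLPNet(input_size=2, hidden_size=[16], lr=1e-3,
                 loss=torch.nn.MSELoss(), optimizer_params=None)

    # Assumed single-segment DataFrame: target plus one extra numeric feature.
    df = pd.DataFrame(
        {
            "segment": "segment_0",
            "target": [float(i) for i in range(30)],
            "feature_1": [float(i % 7) for i in range(30)],
        }
    )

    samples = list(net.make_samples(df, encoder_length=0, decoder_length=7))
    first_sample = samples[0]  # each sample is a dict later collated into a batch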

step(batch: etna.models.nn.mlp.MLPBatch, *args, **kwargs)[source]

Step for loss computation for training or validation.

Parameters

batch (etna.models.nn.mlp.MLPBatch) – batch of data

Returns

loss, true_target, prediction_target
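
A sketch of a single training/validation step; the batch layout repeats the assumptions from the forward example above and is illustrative only:

    import torch
    from etna.models.nn.mlp import MLPNet

    net = MLPNet(input_size=3, hidden_size=[64, 32], lr=1e-3,
                 loss=torch.nn.MSELoss(), optimizer_params=None)
    batch = {
        "decoder_real": torch.rand(16, 7, 3),
        "decoder_target": torch.rand(16, 7, 1),
        "segment": ["segment_0"] * 16,
    }

    loss, true_target, prediction_target = net.step(batch)
    loss.backward()  # in practice the PyTorch Lightning loop drives the step/backward cycle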