MAE

class mmpretrain.models.selfsup.MAE(backbone, neck=None, head=None, target_generator=None, pretrained=None, data_preprocessor=None, init_cfg=None)[source]

MAE.

Implementation of `Masked Autoencoders Are Scalable Vision Learners <https://arxiv.org/abs/2111.06377>`_.
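The core idea of the paper is to mask a large fraction of image patches (75% by default) and feed only the visible patches to the encoder. A minimal pure-numpy sketch of this random masking step (not the actual mmpretrain implementation, which operates on torch tensors inside the model):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Shuffle patch indices and keep only the visible subset.

    `patches` has shape (num_patches, patch_dim). MAE masks a large
    fraction of patches (75% in the paper); the encoder only ever
    sees the remaining visible patches.
    """
    rng = np.random.default_rng(seed)
    num_patches = patches.shape[0]
    num_visible = int(num_patches * (1 - mask_ratio))
    ids_shuffle = rng.permutation(num_patches)
    ids_visible = ids_shuffle[:num_visible]
    mask = np.ones(num_patches, dtype=bool)  # True = masked out
    mask[ids_visible] = False
    return patches[ids_visible], mask

# 16 patches of dimension 4; with mask_ratio=0.75, 4 stay visible.
patches = np.arange(16 * 4, dtype=float).reshape(16, 4)
visible, mask = random_masking(patches)
print(visible.shape)  # (4, 4)
print(int(mask.sum()))  # 12
```

The decoder later receives the visible patch embeddings plus learnable mask tokens at the masked positions, and reconstructs the original pixels.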

loss(inputs, data_samples, **kwargs)[source]

The forward function during training, which computes the reconstruction loss.

Parameters:
  • inputs (torch.Tensor) – The input images.

  • data_samples (List[DataSample]) – All elements required during the forward function.

Returns:

A dictionary of loss components.

Return type:

Dict[str, torch.Tensor]
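In MAE, the loss is the mean-squared reconstruction error computed only over the masked patches, and the method returns it wrapped in a dictionary of loss components. A hedged pure-numpy sketch of that computation (the real method runs the backbone, neck, and head on torch tensors; `mae_loss` here is an illustrative stand-in):

```python
import numpy as np

def mae_loss(pred, target, mask):
    """Mean-squared error over masked patches only.

    `pred` and `target` have shape (num_patches, patch_dim); `mask`
    is 1.0 for masked patches, 0.0 for visible ones. Returns a dict
    of loss components, mirroring the Dict[str, torch.Tensor] return
    type of MAE.loss.
    """
    per_patch = ((pred - target) ** 2).mean(axis=-1)  # (num_patches,)
    loss = (per_patch * mask).sum() / mask.sum()      # masked patches only
    return {'loss': loss}

rng = np.random.default_rng(0)
target = rng.normal(size=(16, 4))
pred = target + 0.1 * rng.normal(size=(16, 4))  # near-perfect reconstruction
mask = np.zeros(16)
mask[:12] = 1.0  # 12 of 16 patches are masked
losses = mae_loss(pred, target, mask)
print(sorted(losses))  # ['loss']
```

Restricting the loss to masked patches is what forces the decoder to genuinely infer the missing content rather than copy visible pixels through.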