ConvNeXt

The ConvNeXt model is based on the “A ConvNet for the 2020s” paper.

Architecture overview

This architecture borrows design choices from transformer-based vision models to modernize a pure convolutional network.

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.2.1/convnext.png

The key takeaways from the paper are the following:

  • replace the stem convolution with a transformer-style "patchify" layer (non-overlapping patches)

  • increase the block kernel size to 7

  • switch to depth-wise convolutions

  • reduce the number of activation and normalization layers
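The design choices above can be sketched as a single residual block in plain PyTorch. This is a minimal illustration, not Holocron's actual implementation: it combines a 4×4 patchify stem, a 7×7 depth-wise convolution (groups equal to the channel count), and an inverted-bottleneck MLP with exactly one LayerNorm and one GELU per block.

```python
import torch
from torch import nn


class ConvNeXtBlock(nn.Module):
    """Minimal sketch of a ConvNeXt-style block (illustrative, not Holocron's code)."""

    def __init__(self, dim: int) -> None:
        super().__init__()
        # Enlarged 7x7 kernel, depth-wise (groups=dim)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        # Single normalization layer per block
        self.norm = nn.LayerNorm(dim)
        # Inverted bottleneck (4x expansion) with a single activation
        self.pwconv1 = nn.Linear(dim, 4 * dim)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # back to NCHW
        return residual + x


# Transformer-style "patchify" stem: non-overlapping 4x4 patches
stem = nn.Conv2d(3, 64, kernel_size=4, stride=4)

x = torch.randn(1, 3, 224, 224)
feats = ConvNeXtBlock(64)(stem(x))  # spatial size divided by 4: (1, 64, 56, 56)
```

Note how normalization and the matrix multiplications operate channels-last, which is why the block permutes to NHWC and back; the real models stack several such blocks per stage with downsampling layers in between.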

Model builders

The following model builders can be used to instantiate a ConvNeXt model, with or without pre-trained weights. All the model builders internally rely on the holocron.models.classification.convnext.ConvNeXt base class. Please refer to the source code for more details about this class.

convnext_atto([pretrained, checkpoint, progress])

ConvNeXt-Atto variant by Ross Wightman, inspired by "A ConvNet for the 2020s"

convnext_femto([pretrained, checkpoint, ...])

ConvNeXt-Femto variant by Ross Wightman, inspired by "A ConvNet for the 2020s"

convnext_pico([pretrained, checkpoint, progress])

ConvNeXt-Pico variant by Ross Wightman, inspired by "A ConvNet for the 2020s"

convnext_nano([pretrained, checkpoint, progress])

ConvNeXt-Nano variant by Ross Wightman, inspired by "A ConvNet for the 2020s"

convnext_tiny([pretrained, checkpoint, progress])

ConvNeXt-T from "A ConvNet for the 2020s"

convnext_small([pretrained, checkpoint, ...])

ConvNeXt-S from "A ConvNet for the 2020s"

convnext_base([pretrained, checkpoint, progress])

ConvNeXt-B from "A ConvNet for the 2020s"

convnext_large([pretrained, checkpoint, ...])

ConvNeXt-L from "A ConvNet for the 2020s"

convnext_xl([pretrained, checkpoint, progress])

ConvNeXt-XL from "A ConvNet for the 2020s"