holocron.models#

The models subpackage contains definitions of models for addressing different tasks, including image classification, pixelwise semantic segmentation, object detection, instance segmentation, person keypoint detection, and video classification.

Classification#

Classification models expect a 4D image tensor as input (N x C x H x W) and return a 2D output (N x K). The output represents the classification scores for each output class.

import holocron.models as models
darknet19 = models.darknet19(num_classes=10)
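
As a quick check of the expected shapes, the sketch below runs a dummy batch through the model (the 224x224 input size is only an illustrative assumption; any standard resolution should work):

import torch
import holocron.models as models

darknet19 = models.darknet19(num_classes=10)
darknet19.eval()
with torch.no_grad():
    # Dummy N x C x H x W batch
    out = darknet19(torch.rand(2, 3, 224, 224))
print(out.shape)  # torch.Size([2, 10]) -> N x K classification scores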

Supported architectures#

Available checkpoints#

Here is the list of available checkpoints:

| Checkpoint | Acc@1 | Acc@5 | Params | Size (MB) |
|---|---|---|---|---|
| CSPDarknet53_Checkpoint.IMAGENETTE | 94.50% | 99.64% | 26.6M | 101.8 |
| CSPDarknet53_Mish_Checkpoint.IMAGENETTE | 94.65% | 99.69% | 26.6M | 101.8 |
| ConvNeXt_Atto_Checkpoint.IMAGENETTE | 87.59% | 98.32% | 3.4M | 12.9 |
| Darknet19_Checkpoint.IMAGENETTE | 93.86% | 99.36% | 19.8M | 75.7 |
| Darknet53_Checkpoint.IMAGENETTE | 94.17% | 99.57% | 40.6M | 155.1 |
| MobileOne_S0_Checkpoint.IMAGENETTE | 88.08% | 98.83% | 4.3M | 16.9 |
| MobileOne_S1_Checkpoint.IMAGENETTE | 91.26% | 99.18% | 3.6M | 13.9 |
| MobileOne_S2_Checkpoint.IMAGENETTE | 91.31% | 99.21% | 5.9M | 22.8 |
| MobileOne_S3_Checkpoint.IMAGENETTE | 91.06% | 99.31% | 8.1M | 31.5 |
| ReXNet1_0x_Checkpoint.IMAGENET1K | 77.86% | 93.87% | 4.8M | 13.7 |
| ReXNet1_0x_Checkpoint.IMAGENETTE | 94.39% | 99.62% | 3.5M | 13.7 |
| ReXNet1_3x_Checkpoint.IMAGENET1K | 79.50% | 94.68% | 7.6M | 13.7 |
| ReXNet1_3x_Checkpoint.IMAGENETTE | 94.88% | 99.39% | 5.9M | 22.8 |
| ReXNet1_5x_Checkpoint.IMAGENET1K | 80.31% | 95.17% | 9.7M | 13.7 |
| ReXNet1_5x_Checkpoint.IMAGENETTE | 94.47% | 99.62% | 7.8M | 30.2 |
| ReXNet2_0x_Checkpoint.IMAGENET1K | 80.31% | 95.17% | 16.4M | 13.7 |
| ReXNet2_0x_Checkpoint.IMAGENETTE | 95.24% | 99.57% | 13.8M | 53.1 |
| ReXNet2_2x_Checkpoint.IMAGENETTE | 95.44% | 99.46% | 16.7M | 64.1 |
| RepVGG_A0_Checkpoint.IMAGENETTE | 92.92% | 99.46% | 24.7M | 94.6 |
| RepVGG_A1_Checkpoint.IMAGENETTE | 93.78% | 99.18% | 30.1M | 115.1 |
| RepVGG_A2_Checkpoint.IMAGENETTE | 93.63% | 99.39% | 48.6M | 185.8 |
| RepVGG_B0_Checkpoint.IMAGENETTE | 92.69% | 99.21% | 31.8M | 121.8 |
| RepVGG_B1_Checkpoint.IMAGENETTE | 93.96% | 99.39% | 100.8M | 385.1 |
| RepVGG_B2_Checkpoint.IMAGENETTE | 94.14% | 99.57% | 157.5M | 601.2 |
| Res2Net50_26w_4s_Checkpoint.IMAGENETTE | 93.94% | 99.41% | 23.7M | 90.6 |
| ResNeXt50_32x4d_Checkpoint.IMAGENETTE | 94.55% | 99.49% | 23.0M | 88.1 |
| ResNet18_Checkpoint.IMAGENETTE | 93.61% | 99.46% | 11.2M | 42.7 |
| ResNet34_Checkpoint.IMAGENETTE | 93.81% | 99.49% | 21.3M | 81.3 |
| ResNet50D_Checkpoint.IMAGENETTE | 94.65% | 99.52% | 23.5M | 90.1 |
| ResNet50_Checkpoint.IMAGENETTE | 93.78% | 99.54% | 23.5M | 90.0 |
| SKNet50_Checkpoint.IMAGENETTE | 94.37% | 99.54% | 35.2M | 134.7 |
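
To use one of these checkpoints, a minimal sketch is shown below. It assumes the classification builders expose a pretrained flag, as the detection and segmentation builders documented below do; the exact loading API (e.g. passing one of the checkpoint enums listed above) may differ across Holocron versions.

import holocron.models as models

# Assumption: classification builders accept a `pretrained` flag; depending on
# the Holocron version, the checkpoint enums above may also be usable directly.
model = models.rexnet1_0x(pretrained=True)
model.eval()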

Object Detection#

Object detection models expect a 4D image tensor as input (N x C x H x W) and return a list of dictionaries. Each dictionary has 3 keys: the box coordinates, the classification probability, and the classification label.

import holocron.models as models
yolov2 = models.yolov2(num_classes=10)
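
A minimal inference sketch is shown below; the exact dictionary key names (e.g. boxes / scores / labels) are an assumption and may differ depending on the Holocron version.

import torch
import holocron.models as models

yolov2 = models.yolov2(num_classes=10)
yolov2.eval()
with torch.no_grad():
    # In eval mode, the model returns one dictionary per input image
    detections = yolov2(torch.rand(1, 3, 416, 416))
print(detections[0].keys())  # assumed to hold box coordinates, scores and labels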

YOLO#

holocron.models.detection.yolov1(pretrained: bool = False, progress: bool = True, pretrained_backbone: bool = True, **kwargs: Any) YOLOv1[source]#

YOLO model from “You Only Look Once: Unified, Real-Time Object Detection”.

YOLO’s particularity is that it makes its predictions on a grid (the same size as the last feature map). For each grid cell, the model predicts classification scores and a fixed number of boxes (default: 2). Each box in the cell gets 5 predictions: an objectness score and 4 coordinates. The 4 coordinates are: the 2D coordinates of the predicted box center (relative to the cell), and the width and height of the predicted box (relative to the whole image).

For training, YOLO uses a multi-part loss whose components are computed by:

\[\mathcal{L}_{coords} = \sum\limits_{i=0}^{S^2} \sum\limits_{j=0}^{B} \mathbb{1}_{ij}^{obj} \Big[ (x_{ij} - \hat{x}_{ij})^2 + (y_{ij} - \hat{y}_{ij})^2 + (\sqrt{w_{ij}} - \sqrt{\hat{w}_{ij}})^2 + (\sqrt{h_{ij}} - \sqrt{\hat{h}_{ij}})^2 \Big]\]

where \(S\) is the size of the output feature map (7 for an input size \((448, 448)\)), \(B\) is the number of anchor boxes per grid cell (default: 2), \(\mathbb{1}_{ij}^{obj}\) equals 1 if the center of a ground truth box falls inside the i-th grid cell and the j-th box has the highest IoU with it among the anchor boxes of that cell (0 otherwise), \((x_{ij}, y_{ij}, w_{ij}, h_{ij})\) are the coordinates of the ground truth assigned to the j-th anchor box of the i-th grid cell, and \((\hat{x}_{ij}, \hat{y}_{ij}, \hat{w}_{ij}, \hat{h}_{ij})\) are the coordinate predictions for the j-th anchor box of the i-th grid cell.

\[\mathcal{L}_{objectness} = \sum\limits_{i=0}^{S^2} \sum\limits_{j=0}^{B} \Big[ \mathbb{1}_{ij}^{obj} \Big(C_{ij} - \hat{C}_{ij} \Big)^2 + \lambda_{noobj} \mathbb{1}_{ij}^{noobj} \Big(C_{ij} - \hat{C}_{ij} \Big)^2 \Big]\]

where \(\lambda_{noobj}\) is a positive coefficient (default: 0.5), \(\mathbb{1}_{ij}^{noobj} = 1 - \mathbb{1}_{ij}^{obj}\), \(C_{ij}\) equals the Intersection over Union between the j-th anchor box in the i-th grid cell and its matched ground truth box if that box is matched with a ground truth (0 otherwise), and \(\hat{C}_{ij}\) is the objectness score of the j-th anchor box in the i-th grid cell.

\[\mathcal{L}_{classification} = \sum\limits_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum\limits_{c \in classes} (p_i(c) - \hat{p}_i(c))^2\]

where \(\mathbb{1}_{i}^{obj}\) equals 1 if the center of a ground truth box falls inside the i-th grid cell (0 otherwise), \(p_i(c)\) equals 1 if the ground truth assigned to the i-th cell is classified as class \(c\), and \(\hat{p}_i(c)\) is the predicted probability of class \(c\) in the i-th cell.

And the full loss is given by:

\[\mathcal{L}_{YOLOv1} = \lambda_{coords} \cdot \mathcal{L}_{coords} + \mathcal{L}_{objectness} + \mathcal{L}_{classification}\]

where \(\lambda_{coords}\) is a positive coefficient (default: 5).
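
To make the matching and the loss terms concrete, here is a toy sketch of the coordinate and objectness terms on dense per-cell tensors (an illustration only, not Holocron’s actual training code; the classification term follows the same masking pattern over class scores):

import torch

S, B, lambda_coords, lambda_noobj = 7, 2, 5.0, 0.5

# Dense targets and predictions per cell and anchor: (S*S, B, 5) -> x, y, w, h, objectness
target = torch.rand(S * S, B, 5)
pred = torch.rand(S * S, B, 5)
# 1 where an anchor box is responsible for a ground truth box, 0 elsewhere
obj_mask = (torch.rand(S * S, B) > 0.8).float()

coords_loss = (
    obj_mask * (
        (target[..., 0] - pred[..., 0]) ** 2
        + (target[..., 1] - pred[..., 1]) ** 2
        + (target[..., 2].sqrt() - pred[..., 2].sqrt()) ** 2
        + (target[..., 3].sqrt() - pred[..., 3].sqrt()) ** 2
    )
).sum()
obj_loss = (obj_mask * (target[..., 4] - pred[..., 4]) ** 2).sum()
noobj_loss = ((1 - obj_mask) * (target[..., 4] - pred[..., 4]) ** 2).sum()

loss = lambda_coords * coords_loss + obj_loss + lambda_noobj * noobj_loss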

Parameters:
  • pretrained (bool, optional) – If True, returns a model pre-trained on ImageNet

  • progress (bool, optional) – If True, displays a progress bar of the download to stderr

  • pretrained_backbone (bool, optional) – If True, backbone parameters will have been pretrained on Imagenette

  • kwargs – keyword args of _yolo

Returns:

detection module

Return type:

torch.nn.Module

holocron.models.detection.yolov2(pretrained: bool = False, progress: bool = True, pretrained_backbone: bool = True, **kwargs: Any) YOLOv2[source]#

YOLOv2 model from “YOLO9000: Better, Faster, Stronger”.

YOLOv2 improves upon YOLO by raising the number of boxes predicted per grid cell (default: 5), introducing bounding box priors, and predicting class scores for each anchor box in the grid cell.

For training, YOLOv2 uses the same multi-part loss as YOLO apart from its classification loss:

\[\mathcal{L}_{classification} = \sum\limits_{i=0}^{S^2} \sum\limits_{j=0}^{B} \mathbb{1}_{ij}^{obj} \sum\limits_{c \in classes} (p_{ij}(c) - \hat{p}_{ij}(c))^2\]

where \(S\) is the size of the output feature map (13 for an input size \((416, 416)\)), \(B\) is the number of anchor boxes per grid cell (default: 5), \(\mathbb{1}_{ij}^{obj}\) equals 1 if the center of a ground truth box falls inside the i-th grid cell and the j-th box has the highest IoU with it among the anchor boxes of that cell (0 otherwise), \(p_{ij}(c)\) equals 1 if the ground truth assigned to the j-th anchor box of the i-th cell is classified as class \(c\), and \(\hat{p}_{ij}(c)\) is the predicted probability of class \(c\) for the j-th anchor box in the i-th cell.

Parameters:
  • pretrained (bool, optional) – If True, returns a model pre-trained on ImageNet

  • progress (bool, optional) – If True, displays a progress bar of the download to stderr

  • pretrained_backbone (bool, optional) – If True, backbone parameters will have been pretrained on Imagenette

  • kwargs – keyword args of _yolo

Returns:

detection module

Return type:

torch.nn.Module

holocron.models.detection.yolov4(pretrained: bool = False, progress: bool = True, pretrained_backbone: bool = True, **kwargs: Any) YOLOv4[source]#

YOLOv4 model from “YOLOv4: Optimal Speed and Accuracy of Object Detection”.

The architecture improves upon YOLOv3 by adding DropBlock regularization, Mish activation, CSP and SAM in the backbone, and SPP and PAN in the neck.

For training, YOLOv4 uses the same multi-part loss as YOLOv3 apart from its box coordinate loss:

\[\mathcal{L}_{coords} = \sum\limits_{i=0}^{S^2} \sum\limits_{j=0}^{B} \min\limits_{k \in [1, M]} C_{IoU}(\hat{loc}_{ij}, loc^{GT}_k)\]

where \(S\) is the size of the output feature map (13 for an input size \((416, 416)\)), \(B\) is the number of anchor boxes per grid cell (default: 3), \(M\) is the number of ground truth boxes, \(C_{IoU}\) is the complete IoU loss, \(\hat{loc}_{ij}\) is the predicted bounding box for grid cell \(i\) at anchor \(j\), and \(loc^{GT}_k\) is the k-th ground truth bounding box.
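
For illustration, the coordinate term can be sketched with torchvision’s complete IoU loss (a toy example, not Holocron’s actual code; boxes are assumed to be in (x1, y1, x2, y2) format, as torchvision expects):

import torch
from torchvision.ops import complete_box_iou_loss

# Toy predicted boxes (flattened over grid cells and anchors) and ground truth boxes
pred_boxes = torch.tensor([[10.0, 10.0, 50.0, 60.0], [30.0, 20.0, 80.0, 90.0]])
gt_boxes = torch.tensor([[12.0, 8.0, 48.0, 62.0], [100.0, 100.0, 150.0, 160.0]])

P, M = pred_boxes.shape[0], gt_boxes.shape[0]
# Pairwise CIoU loss between every prediction and every ground truth
pairwise = complete_box_iou_loss(
    pred_boxes[:, None, :].expand(P, M, 4).reshape(-1, 4),
    gt_boxes[None, :, :].expand(P, M, 4).reshape(-1, 4),
    reduction="none",
).reshape(P, M)

# For each prediction, keep the loss against its closest ground truth, then sum
coords_loss = pairwise.min(dim=1).values.sum()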

Parameters:
  • pretrained (bool, optional) – If True, returns a model pre-trained on ImageNet

  • progress (bool, optional) – If True, displays a progress bar of the download to stderr

  • pretrained_backbone (bool, optional) – If True, backbone parameters will have been pretrained on Imagenette

  • kwargs – keyword args of _yolo

Returns:

detection module

Return type:

torch.nn.Module

Semantic Segmentation#

Semantic segmentation models expect a 4D image tensor as input (N x C x H x W) and return a classification score tensor of size (N x K x Ho x Wo).

import holocron.models as models
unet = models.unet(num_classes=10)
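
As a quick sketch of the output shape (the 256x256 input size is an illustrative assumption), the class scores can be turned into a per-pixel mask with an argmax over the class dimension:

import torch
import holocron.models as models

unet = models.unet(num_classes=10)
unet.eval()
with torch.no_grad():
    scores = unet(torch.rand(1, 3, 256, 256))  # N x K x Ho x Wo
mask = scores.argmax(dim=1)  # N x Ho x Wo, one class index per pixel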

U-Net#

holocron.models.segmentation.unet(pretrained: bool = False, progress: bool = True, **kwargs: Any) UNet[source]#

U-Net from “U-Net: Convolutional Networks for Biomedical Image Segmentation”

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.1.3/unet.png
Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _unet

Returns:

semantic segmentation model

holocron.models.segmentation.unetp(pretrained: bool = False, progress: bool = True, **kwargs: Any) UNetp[source]#

UNet+ from “UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation”

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.1.3/unetp.png
Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _unet

Returns:

semantic segmentation model

holocron.models.segmentation.unetpp(pretrained: bool = False, progress: bool = True, **kwargs: Any) UNetpp[source]#

UNet++ from “UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation”

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.1.3/unetpp.png
Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _unet

Returns:

semantic segmentation model

holocron.models.segmentation.unet3p(pretrained: bool = False, progress: bool = True, **kwargs: Any) UNet3p[source]#

UNet3+ from “UNet 3+: A Full-Scale Connected UNet For Medical Image Segmentation”

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.1.3/unet3p.png
Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _unet

Returns:

semantic segmentation model

holocron.models.segmentation.unet2(pretrained: bool = False, progress: bool = True, in_channels: int = 3, **kwargs: Any) DynamicUNet[source]#

Modified version of U-Net from “U-Net: Convolutional Networks for Biomedical Image Segmentation” that includes a more advanced upscaling block inspired by fastai.

Architecture diagram: https://github.com/frgfm/Holocron/releases/download/v0.1.3/unet.png
Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • progress – If True, displays a progress bar of the download to stderr

  • in_channels – number of input channels

  • kwargs – keyword args of _dynamic_unet

Returns:

semantic segmentation model
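
Since this builder exposes an in_channels argument, single-channel inputs (e.g. grayscale images) can be handled directly. A minimal sketch, assuming num_classes is forwarded through the keyword arguments as for the other segmentation builders:

import holocron.models as models

# Assumption: num_classes is forwarded to the underlying DynamicUNet constructor
unet2_gray = models.segmentation.unet2(in_channels=1, num_classes=2)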

holocron.models.segmentation.unet_tvvgg11(pretrained: bool = False, pretrained_backbone: bool = True, progress: bool = True, **kwargs: Any) DynamicUNet[source]#

U-Net from “U-Net: Convolutional Networks for Biomedical Image Segmentation” with a VGG-11 backbone used as encoder, and more advanced upscaling blocks inspired by fastai.

Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • pretrained_backbone – If True, the encoder will load pretrained parameters from ImageNet

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _dynamic_unet

Returns:

semantic segmentation model

holocron.models.segmentation.unet_tvresnet34(pretrained: bool = False, pretrained_backbone: bool = True, progress: bool = True, **kwargs: Any) DynamicUNet[source]#

U-Net from “U-Net: Convolutional Networks for Biomedical Image Segmentation” with a ResNet-34 backbone used as encoder, and more advanced upscaling blocks inspired by fastai.

Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • pretrained_backbone – If True, the encoder will load pretrained parameters from ImageNet

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – keyword args of _dynamic_unet

Returns:

semantic segmentation model

holocron.models.segmentation.unet_rexnet13(pretrained: bool = False, pretrained_backbone: bool = True, progress: bool = True, in_channels: int = 3, **kwargs: Any) DynamicUNet[source]#

U-Net from “U-Net: Convolutional Networks for Biomedical Image Segmentation” with a ReXNet-1.3x backbone used as encoder, and more advanced upscaling blocks inspired by fastai.

Parameters:
  • pretrained – If True, returns a model pre-trained on PASCAL VOC2012

  • pretrained_backbone – If True, the encoder will load pretrained parameters from ImageNet

  • progress – If True, displays a progress bar of the download to stderr

  • in_channels – the number of input channels

  • kwargs – keyword args of _dynamic_unet

Returns:

semantic segmentation model