holocron.nn.functional

Non-linear activations

holocron.nn.functional.hard_mish(x: Tensor, inplace: bool = False) → Tensor

Implements the HardMish activation function

Parameters:
  • x – input tensor

  • inplace – whether the operation should be conducted inplace

Returns:

output tensor
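HardMish is commonly defined as x/2 · min(2, max(0, x + 2)), a cheap piecewise-linear approximation of Mish. A minimal PyTorch sketch of that formula (not necessarily holocron's exact implementation):

```python
import torch

def hard_mish(x: torch.Tensor, inplace: bool = False) -> torch.Tensor:
    # x/2 * min(2, max(0, x + 2)): piecewise-linear approximation of Mish
    if inplace:
        return x.mul_(0.5 * (x + 2).clamp(min=0, max=2))
    return 0.5 * x * (x + 2).clamp(min=0, max=2)

x = torch.tensor([-4.0, -1.0, 0.0, 2.0])
out = hard_mish(x)
```

Note that the function is the identity for x ≥ 0 and exactly zero for x ≤ −2.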

holocron.nn.functional.nl_relu(x: Tensor, beta: float = 1.0, inplace: bool = False) → Tensor

Implements the natural logarithm ReLU activation function

Parameters:
  • x – input tensor

  • beta – beta parameter of the NLReLU

  • inplace – whether the operation should be performed inplace

Returns:

output tensor
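The natural-logarithm ReLU is usually given as ln(1 + β · max(0, x)), which compresses large positive activations logarithmically. A sketch of that definition (assumed here; check holocron's source for the exact form):

```python
import torch

def nl_relu(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # log(1 + beta * max(0, x)): logarithmic compression of positive values
    return torch.log1p(beta * torch.relu(x))

x = torch.tensor([-2.0, 0.0, 1.0])
out = nl_relu(x)
```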

Loss functions

holocron.nn.functional.focal_loss(x: Tensor, target: Tensor, weight: Tensor | None = None, ignore_index: int = -100, reduction: str = 'mean', gamma: float = 2.0) → Tensor

Implements the focal loss from “Focal Loss for Dense Object Detection”

Parameters:
  • x (torch.Tensor[N, K, ...]) – input tensor

  • target (torch.Tensor[N, ...]) – hard target tensor

  • weight (torch.Tensor[K], optional) – manual rescaling of each class

  • ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient

  • reduction (str, optional) – reduction method

  • gamma (float, optional) – gamma parameter of focal loss

Returns:

loss reduced with reduction method

Return type:

torch.Tensor
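Focal loss down-weights well-classified examples by scaling each sample's cross entropy with the modulating factor (1 − p_t)^γ. A simplified sketch for N×K inputs (omitting the weight and ignore_index options):

```python
import torch
import torch.nn.functional as F

def focal_loss(x: torch.Tensor, target: torch.Tensor, gamma: float = 2.0,
               reduction: str = "mean") -> torch.Tensor:
    # per-sample cross entropy: -log p_t for the target class
    logpt = F.log_softmax(x, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    pt = logpt.exp()
    # down-weight easy examples with the (1 - p_t)^gamma modulating factor
    loss = -((1 - pt) ** gamma) * logpt
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(4, 10)
target = torch.randint(0, 10, (4,))
loss = focal_loss(x, target)
```

With gamma = 0 this reduces to the standard cross entropy.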

holocron.nn.functional.multilabel_cross_entropy(x: Tensor, target: Tensor, weight: Tensor | None = None, ignore_index: int = -100, reduction: str = 'mean') → Tensor

Implements the cross entropy loss for multi-label targets

Parameters:
  • x (torch.Tensor[N, K, ...]) – input tensor

  • target (torch.Tensor[N, K, ...]) – target tensor

  • weight (torch.Tensor[K], optional) – manual rescaling of each class

  • ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient

  • reduction (str, optional) – reduction method

Returns:

loss reduced with reduction method

Return type:

torch.Tensor
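With multi-label (or soft) targets, the cross entropy generalizes to −Σ_k target_k · log_softmax(x)_k per sample. A sketch of that form (omitting the weight and ignore_index options); with one-hot targets it recovers the standard cross entropy:

```python
import torch
import torch.nn.functional as F

def multilabel_cross_entropy(x: torch.Tensor, target: torch.Tensor,
                             reduction: str = "mean") -> torch.Tensor:
    # target is an [N, K] distribution rather than an [N] class index
    loss = -(target * F.log_softmax(x, dim=1)).sum(dim=1)
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(2, 5)
# one-hot targets recover the standard cross entropy
target = F.one_hot(torch.tensor([1, 3]), num_classes=5).float()
loss = multilabel_cross_entropy(x, target)
```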

holocron.nn.functional.complement_cross_entropy(x: Tensor, target: Tensor, weight: Tensor | None = None, ignore_index: int = -100, reduction: str = 'mean', gamma: float = -1) → Tensor

Implements the complement cross entropy loss from “Imbalanced Image Classification with Complement Cross Entropy”

Parameters:
  • x (torch.Tensor[N, K, ...]) – input tensor

  • target (torch.Tensor[N, ...]) – target tensor

  • weight (torch.Tensor[K], optional) – manual rescaling of each class

  • ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient

  • reduction (str, optional) – reduction method

  • gamma (float, optional) – complement factor

Returns:

loss reduced with reduction method

Return type:

torch.Tensor

holocron.nn.functional.dice_loss(x: Tensor, target: Tensor, weight: Tensor | None = None, gamma: float = 1.0, eps: float = 1e-08) → Tensor

Implements the dice loss from “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation”

Parameters:
  • x (torch.Tensor[N, K, ...]) – predicted probability

  • target (torch.Tensor[N, K, ...]) – target probability

  • weight (torch.Tensor[K], optional) – manual rescaling of each class

  • gamma (float, optional) – controls the balance between recall (gamma > 1) and precision (gamma < 1)

  • eps (float, optional) – epsilon added to balance the loss and avoid division by zero

Returns:

loss reduced with reduction method

Return type:

torch.Tensor
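The soft Dice loss compares predicted and target probabilities per class as 1 − 2|P ∩ T| / (|P| + |T|). A sketch of the basic form (omitting the gamma and weight options):

```python
import torch
import torch.nn.functional as F

def dice_loss(x: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # sum over batch and spatial dims, keeping one Dice score per class
    dims = (0, *range(2, x.dim()))
    inter = (x * target).sum(dim=dims)
    cardinality = x.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * inter + eps) / (cardinality + eps)
    return 1 - dice.mean()

# a perfect hard prediction yields a loss of 0
target = F.one_hot(torch.randint(0, 3, (2, 4, 4)), num_classes=3).permute(0, 3, 1, 2).float()
loss = dice_loss(target, target)
```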

.. autofunction:: poly_loss

Convolutions

holocron.nn.functional.norm_conv2d(x: Tensor, weight: Tensor, bias: Tensor | None = None, stride: int | Tuple[int, int] = 1, padding: int | Tuple[int, int] = 0, dilation: int | Tuple[int, int] = 1, groups: int = 1, eps: float = 1e-14) → Tensor

Implements a normalized convolution operation in 2D. Based on the implementation by the paper’s author. See NormConv2d for details and output shape.

Parameters:
  • x (torch.Tensor[N, in_channels, H, W]) – input tensor

  • weight (torch.Tensor[out_channels, in_channels, Kh, Kw]) – filters

  • bias (torch.Tensor[out_channels], optional) – optional bias tensor of shape (out_channels). Default: None

  • stride (int, optional) – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

  • padding (int, optional) – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

  • dilation (int, optional) – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

  • groups (int, optional) – split input into groups, in_channels should be divisible by the number of groups. Default: 1

  • eps (float, optional) – a value added to the denominator for numerical stability. Default: 1e-14

Examples::
>>> # With square kernels and padding of 1
>>> filters = torch.randn(8, 4, 3, 3)
>>> inputs = torch.randn(1, 4, 5, 5)
>>> F.norm_conv2d(inputs, filters, padding=1)
holocron.nn.functional.add2d(x: Tensor, weight: Tensor, bias: Tensor | None = None, stride: int | Tuple[int, int] = 1, padding: int | Tuple[int, int] = 0, dilation: int | Tuple[int, int] = 1, groups: int = 1, normalize_slices: bool = False, eps: float = 1e-14) → Tensor

Implements an adder operation in 2D from “AdderNet: Do We Really Need Multiplications in Deep Learning?”. See Add2d for details and output shape.

Parameters:
  • x (torch.Tensor[N, in_channels, H, W]) – input tensor

  • weight (torch.Tensor[out_channels, in_channels, Kh, Kw]) – filters

  • bias (torch.Tensor[out_channels], optional) – optional bias tensor of shape (out_channels). Default: None

  • stride (int, optional) – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

  • padding (int, optional) – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

  • dilation (int, optional) – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

  • groups (int, optional) – split input into groups, in_channels should be divisible by the number of groups. Default: 1

  • normalize_slices (bool, optional) – whether input slices should be normalized

  • eps (float, optional) – a value added to the denominator for numerical stability. Default: 1e-14

Examples::
>>> # With square kernels and padding of 1
>>> filters = torch.randn(8, 4, 3, 3)
>>> inputs = torch.randn(1, 4, 5, 5)
>>> F.add2d(inputs, filters, padding=1)
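The adder operation replaces the convolution’s dot product with a negative L1 distance between each input patch and the filter: Y = −Σ |X − F|. A sketch built on torch.nn.functional.unfold (stride and padding only; no bias, groups, dilation, or slice normalization):

```python
import torch
import torch.nn.functional as F

def add2d(x: torch.Tensor, weight: torch.Tensor, stride: int = 1,
          padding: int = 0) -> torch.Tensor:
    # AdderNet: replace the conv dot product with a negative L1 distance
    n, _, h, w = x.shape
    out_ch, _, kh, kw = weight.shape
    patches = F.unfold(x, (kh, kw), stride=stride, padding=padding)  # [N, C*kh*kw, L]
    diff = patches.unsqueeze(1) - weight.view(1, out_ch, -1, 1)      # broadcast filters
    out = -diff.abs().sum(dim=2)                                     # [N, out_ch, L]
    oh = (h + 2 * padding - kh) // stride + 1
    ow = (w + 2 * padding - kw) // stride + 1
    return out.view(n, out_ch, oh, ow)

filters = torch.randn(8, 4, 3, 3)
inputs = torch.randn(1, 4, 5, 5)
out = add2d(inputs, filters, padding=1)  # shape [1, 8, 5, 5]
```

Since every output is a negated sum of absolute differences, the result is always non-positive.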

Regularization layers

holocron.nn.functional.dropblock2d(x: Tensor, drop_prob: float, block_size: int, inplace: bool = False, training: bool = True) → Tensor

Implements the dropblock operation from “DropBlock: A regularization method for convolutional networks”

Parameters:
  • x (torch.Tensor) – input tensor of shape (N, C, H, W)

  • drop_prob (float) – probability of dropping activation value

  • block_size (int) – size of each block that is expanded from the sampled mask

  • inplace (bool, optional) – whether the operation should be done inplace

  • training (bool, optional) – whether the input should be processed in training mode
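DropBlock samples a sparse Bernoulli seed mask, expands every seed into a block_size × block_size region (conveniently done with a max-pool), zeroes those regions, and rescales the survivors. A simplified sketch using a naive seed probability (the paper derives a boundary-corrected one):

```python
import torch
import torch.nn.functional as F

def dropblock2d(x: torch.Tensor, drop_prob: float, block_size: int,
                training: bool = True) -> torch.Tensor:
    # no-op at inference time or when nothing is dropped
    if not training or drop_prob == 0.0:
        return x
    gamma = drop_prob / (block_size ** 2)  # naive seed probability
    seeds = torch.bernoulli(torch.full_like(x, gamma))
    # expand each seed into a block with a max-pool (block_size assumed odd)
    block_mask = F.max_pool2d(seeds, block_size, stride=1, padding=block_size // 2)
    keep = 1 - block_mask.clamp(max=1)
    # rescale so the expected activation magnitude is preserved
    return x * keep * keep.numel() / keep.sum().clamp(min=1)

x = torch.randn(2, 3, 8, 8)
out = dropblock2d(x, drop_prob=0.2, block_size=3)
```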

Downsampling

holocron.nn.functional.concat_downsample2d(x: Tensor, scale_factor: int) → Tensor

Implements a loss-less downsampling operation described in “YOLO9000: Better, Faster, Stronger” by stacking adjacent information on the channel dimension.

Parameters:
  • x (torch.Tensor[N, C, H, W]) – input tensor

  • scale_factor (int) – spatial scaling factor

Returns:

downsampled tensor

Return type:

torch.Tensor[N, scale_factor ** 2 * C, H / scale_factor, W / scale_factor]
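The operation is a space-to-depth rearrangement: each scale_factor × scale_factor spatial neighborhood is moved onto the channel axis, so no information is lost. A sketch with view/permute (holocron’s exact channel ordering may differ):

```python
import torch

def concat_downsample2d(x: torch.Tensor, scale_factor: int) -> torch.Tensor:
    # stack each s x s spatial neighborhood on the channel axis (space-to-depth)
    n, c, h, w = x.shape
    s = scale_factor
    x = x.view(n, c, h // s, s, w // s, s)
    x = x.permute(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * s * s, h // s, w // s)

x = torch.arange(16.0).view(1, 1, 4, 4)
out = concat_downsample2d(x, 2)  # shape [1, 4, 2, 2]
```

Because the rearrangement is a pure permutation of elements, every input value survives in the output.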

holocron.nn.functional.z_pool(x: Tensor, dim: int) → Tensor

Z-pool layer from “Rotate to Attend: Convolutional Triplet Attention Module”.

Parameters:
  • x – input tensor

  • dim – dimension to pool
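Z-pool concatenates the max-pooled and mean-pooled slices of the input along the chosen dimension, reducing that dimension to size 2. A minimal sketch:

```python
import torch

def z_pool(x: torch.Tensor, dim: int) -> torch.Tensor:
    # stack max-pooled and mean-pooled views along the pooled dimension
    return torch.cat(
        [x.max(dim, keepdim=True).values, x.mean(dim, keepdim=True)], dim=dim
    )

x = torch.randn(2, 16, 8, 8)
out = z_pool(x, dim=1)  # channel dim collapses to 2
```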