holocron.nn.functional¶
Non-linear activations¶
- holocron.nn.functional.mish(x)[source]¶
Implements the Mish activation function
- Parameters:
x (torch.Tensor) – input tensor
- Returns:
output tensor
- Return type:
torch.Tensor[x.size()]
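Mish is defined as x · tanh(softplus(x)). A minimal sketch in plain PyTorch (an illustrative stand-in, not Holocron's actual implementation):

```python
import torch
import torch.nn.functional as F

def mish_sketch(x: torch.Tensor) -> torch.Tensor:
    # Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
    return x * torch.tanh(F.softplus(x))

x = torch.tensor([-1.0, 0.0, 1.0])
out = mish_sketch(x)  # same shape as x
```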
- holocron.nn.functional.nl_relu(x, beta=1.0, inplace=False)[source]¶
Implements the natural logarithm ReLU activation function
- Parameters:
x (torch.Tensor) – input tensor
beta (float) – beta parameter of the natural log ReLU
inplace (bool) – whether the operation should be performed inplace
- Returns:
output tensor
- Return type:
torch.Tensor[x.size()]
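The natural logarithm ReLU can be sketched as log(1 + beta · ReLU(x)); a minimal PyTorch version, assuming that formula (the inplace variant is omitted for brevity):

```python
import torch
import torch.nn.functional as F

def nl_relu_sketch(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # NReLU(x) = log(1 + beta * max(x, 0)); log1p improves numerical accuracy
    return torch.log1p(beta * F.relu(x))

x = torch.tensor([-2.0, 0.0, 1.0])
out = nl_relu_sketch(x)  # negative inputs map to 0
```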
Loss functions¶
- holocron.nn.functional.focal_loss(x, target, weight=None, ignore_index=-100, reduction='mean', gamma=2)[source]¶
Implements the focal loss from “Focal Loss for Dense Object Detection”
- Parameters:
x (torch.Tensor[N, K, ...]) – input tensor
target (torch.Tensor[N, ...]) – hard target tensor
weight (torch.Tensor[K], optional) – manual rescaling of each class
ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient
reduction (str, optional) – reduction method
gamma (float, optional) – gamma parameter of focal loss
- Returns:
loss reduced with reduction method
- Return type:
torch.Tensor
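The focal loss scales the cross entropy of each sample by (1 − pt)^gamma, where pt is the probability assigned to the true class, so well-classified examples contribute less. A hedged sketch of the core computation (weight and ignore_index are omitted; this is illustrative, not Holocron's source):

```python
import torch
import torch.nn.functional as F

def focal_loss_sketch(x, target, gamma=2.0, reduction="mean"):
    # Per-sample cross entropy, kept unreduced
    ce = F.cross_entropy(x, target, reduction="none")
    pt = torch.exp(-ce)            # probability of the true class
    loss = (1 - pt) ** gamma * ce  # down-weight easy examples
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(4, 5)                 # N=4 samples, K=5 classes
target = torch.tensor([0, 1, 2, 3])   # hard class indices
loss = focal_loss_sketch(x, target)
```

With gamma=0 the modulating factor vanishes and the sketch reduces to plain cross entropy.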
- holocron.nn.functional.multilabel_cross_entropy(x, target, weight=None, ignore_index=-100, reduction='mean')[source]¶
Implements the cross entropy loss for multi-label targets
- Parameters:
x (torch.Tensor[N, K, ...]) – input tensor
target (torch.Tensor[N, K, ...]) – target tensor
weight (torch.Tensor[K], optional) – manual rescaling of each class
ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient
reduction (str, optional) – reduction method
- Returns:
loss reduced with reduction method
- Return type:
torch.Tensor
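Unlike the entries above, the target here has the same [N, K, ...] shape as the input, holding a soft (or multi-hot) distribution over the K classes. A minimal sketch of this cross entropy, assuming the usual −Σ target · log_softmax(x) form (weight and ignore_index omitted):

```python
import torch
import torch.nn.functional as F

def multilabel_ce_sketch(x, target, reduction="mean"):
    # target: probability distribution (or multi-hot mask) per sample
    loss = -(target * F.log_softmax(x, dim=1)).sum(dim=1)
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(4, 5)
soft_target = torch.softmax(torch.randn(4, 5), dim=1)
loss = multilabel_ce_sketch(x, soft_target)
```

With a one-hot target this coincides with the standard single-label cross entropy.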
- holocron.nn.functional.ls_cross_entropy(x, target, weight=None, ignore_index=-100, reduction='mean', eps=0.1)[source]¶
Implements the label smoothing cross entropy loss from “Attention Is All You Need”
- Parameters:
x (torch.Tensor[N, K, ...]) – input tensor
target (torch.Tensor[N, ...]) – target tensor
weight (torch.Tensor[K], optional) – manual rescaling of each class
ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient
reduction (str, optional) – reduction method
eps (float, optional) – smoothing factor
- Returns:
loss reduced with reduction method
- Return type:
torch.Tensor
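Label smoothing mixes the one-hot target with a uniform distribution over the K classes, weighted by eps. A hedged sketch of the common formulation (weight and ignore_index omitted; not Holocron's exact source):

```python
import torch
import torch.nn.functional as F

def ls_ce_sketch(x, target, eps=0.1, reduction="mean"):
    logp = F.log_softmax(x, dim=1)
    nll = F.nll_loss(logp, target, reduction="none")  # one-hot term
    smooth = -logp.mean(dim=1)                        # uniform-distribution term
    loss = (1 - eps) * nll + eps * smooth
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(4, 5)
target = torch.tensor([0, 1, 2, 3])
loss = ls_ce_sketch(x, target, eps=0.1)
```

Setting eps=0 recovers the ordinary cross entropy.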
Downsampling¶
- holocron.nn.functional.concat_downsample2d(x, scale_factor)[source]¶
Implements a lossless downsampling operation described in “YOLO9000: Better, Faster, Stronger” by stacking adjacent information on the channel dimension.
- Parameters:
x (torch.Tensor[N, C, H, W]) – input tensor
scale_factor (int) – spatial scaling factor
- Returns:
downsampled tensor
- Return type:
torch.Tensor[N, scale_factor ** 2 * C, H / scale_factor, W / scale_factor]
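This is a space-to-depth rearrangement: each scale_factor × scale_factor spatial block is moved onto the channel axis, so no information is lost. PyTorch's built-in pixel_unshuffle performs the same rearrangement, though its channel ordering may differ from Holocron's own implementation:

```python
import torch
import torch.nn.functional as F

def concat_downsample2d_sketch(x: torch.Tensor, scale_factor: int) -> torch.Tensor:
    # Space-to-depth: (N, C, H, W) -> (N, C * s**2, H // s, W // s)
    return F.pixel_unshuffle(x, scale_factor)

x = torch.randn(2, 3, 4, 6)
out = concat_downsample2d_sketch(x, 2)  # shape (2, 12, 2, 3)
```

Because the operation only rearranges values, the output has exactly as many elements as the input.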