The following code (taken from here) seems to implement only a simple Dropout, neither DropPath nor DropConnect. Is that true?
import torch

def drop_path(x, drop_prob: float = 0., training: bool = False):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output
No, it is different from Dropout:
import torch
from torch.nn.functional import dropout

torch.manual_seed(2021)

def drop_path(x, drop_prob: float = 0., training: bool = False):
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output
x = torch.rand(3, 2, 2, 2)
# DropPath
d1_out = drop_path(x, drop_prob=0.33, training=True)
# Dropout
d2_out = dropout(x, p=0.33, training=True)
Let's compare the outputs (for easier reading, I removed the line breaks between the channel dimensions):
# DropPath
print(d1_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.0000],
# [0.0000, 0.0000]],
# [[0.0000, 0.0000],
# [0.0000, 0.0000]]],
#
# [[[0.7658, 0.4417],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [1.4840, 0.7499]]]])
# Dropout
print(d2_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.1480],
# [1.2083, 0.0000]],
# [[1.2272, 0.1853],
# [0.0000, 0.5385]]],
#
# [[[0.7658, 0.0000],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [0.0000, 0.7499]]]])
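To make the per-sample behavior explicit, here is a quick check on the tensors above (the expected results in the comments follow directly from the printed values):

# DropPath zeroed out one whole sample; Dropout only zeroed individual elements
print((d1_out.flatten(1) == 0).all(dim=1))  # tensor([False,  True, False])
print((d2_out.flatten(1) == 0).all(dim=1))  # tensor([False, False, False])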
As you can see, they are different. DropPath is dropping entire samples from the batch, which effectively results in stochastic depth when used as in Eq. 2 of their paper (see the residual-block sketch below). Dropout, on the other hand, is dropping random values, as expected (from the docs):
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
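For context, here is a minimal sketch of how drop_path is typically used inside a residual block (the block architecture is illustrative, not taken from any particular model): the identity path always survives, while the main path is zeroed for whole samples, which is the stochastic-depth update.

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int, drop_prob: float = 0.):
        super().__init__()
        # illustrative main path; any sub-network works here
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.drop_prob = drop_prob

    def forward(self, x):
        # x + f(x), with f(x) dropped per sample: when a sample's mask is 0,
        # only the identity path remains, i.e. the block is skipped entirely
        return x + drop_path(self.block(x), self.drop_prob, self.training)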
Also note that both scale the output values based on the probability, i.e., for the same p, the non-zeroed-out elements are identical (a quick check below confirms this).
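That claim can be verified with the tensors from above; this prints True because both implementations divide the kept values by 1 - p:

# wherever both methods kept a value, the scaled results agree
both_kept = (d1_out != 0) & (d2_out != 0)
print(torch.allclose(d1_out[both_kept], d2_out[both_kept]))  # True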