Differences with torch.nn.BatchNorm3d
torch.nn.BatchNorm3d
class torch.nn.BatchNorm3d(
num_features,
eps=1e-05,
momentum=0.1,
affine=True,
track_running_stats=True
)(input) -> Tensor
For more information, see torch.nn.BatchNorm3d.
mindspore.nn.BatchNorm3d
class mindspore.nn.BatchNorm3d(
num_features,
eps=1e-5,
momentum=0.9,
affine=True,
gamma_init='ones',
beta_init='zeros',
moving_mean_init='zeros',
moving_var_init='ones',
use_batch_statistics=None
)(x) -> Tensor
For more information, see mindspore.nn.BatchNorm3d.
Differences
PyTorch: Applies batch normalization over five-dimensional input (a mini-batch of three-dimensional inputs with an additional channel dimension) to reduce internal covariate shift.
MindSpore: This API is functionally almost the same as the PyTorch one, with two notable differences. First, the default value of momentum in MindSpore is 0.9, and it relates to PyTorch's momentum as 1 - momentum, so the default values produce the same behavior. Second, the strategy for updating parameters during training and inference differs from PyTorch's.
| Categories | Subcategories | PyTorch | MindSpore | Differences |
|---|---|---|---|---|
| Parameters | Parameter 1 | num_features | num_features | - |
| | Parameter 2 | eps | eps | - |
| | Parameter 3 | momentum | momentum | Same function, but the default value is 0.1 in PyTorch and 0.9 in MindSpore; the relationship with PyTorch's momentum is 1 - momentum, so the default values behave the same. |
| | Parameter 4 | affine | affine | - |
| | Parameter 5 | track_running_stats | use_batch_statistics | Same function; different values correspond to different default behaviors (see Difference 1 below). |
| | Parameter 6 | - | gamma_init | Initialization method of the γ parameter. Default: "ones". |
| | Parameter 7 | - | beta_init | Initialization method of the β parameter. Default: "zeros". |
| | Parameter 8 | - | moving_mean_init | Initialization method of the moving mean. Default: "zeros". |
| | Parameter 9 | - | moving_var_init | Initialization method of the moving variance. Default: "ones". |
| Input | Single input | input | x | Same function; only the parameter names differ. |
The detailed differences are as follows: BatchNorm is a special regularization technique in the CV field. It computes differently during training and inference, and this behavior is usually controlled through operator attributes. MindSpore and PyTorch use two different sets of parameters to control this switch.
Difference 1
torch.nn.BatchNorm3d status under different parameters

| training | track_running_stats | Status |
|---|---|---|
| True | True | Expected training status. running_mean and running_var trace the statistical features of the batches over the entire training process. Each group of input data is normalized based on the mean and var of the current batch, and then running_mean and running_var are updated. |
| True | False | Each group of input data is normalized based on the statistics of the current batch, but the running_mean and running_var parameters do not exist. |
| False | True | Expected inference status. BN uses running_mean and running_var for normalization and does not update them. |
| False | False | The effect is the same as that of the second status; the only difference is that this is the inference status, so the weight and bias parameters are not learned. Generally, this status is not used. |

mindspore.nn.BatchNorm3d status under different parameters

| use_batch_statistics | Status |
|---|---|
| True | Expected training status. moving_mean and moving_var trace the statistical features of the batches over the entire training process. Each group of input data is normalized based on the mean and var of the current batch, and then moving_mean and moving_var are updated. |
| False | Expected inference status. BN uses moving_mean and moving_var for normalization and does not update them. |
| None | use_batch_statistics is set automatically: True during training and False during inference. |

Compared with torch.nn.BatchNorm3d, mindspore.nn.BatchNorm3d does not have the two redundant states and keeps only the most commonly used training and inference states.
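As an illustration, the following minimal MindSpore sketch shows the three use_batch_statistics settings described above; the input values are arbitrary and only for demonstration.
# MindSpore: the three use_batch_statistics settings (illustrative sketch)
import numpy as np
from mindspore import Tensor, nn

x = Tensor(np.random.randn(1, 2, 1, 2, 2).astype(np.float32))  # (N, C, D, H, W)

# True: always normalize with the statistics of the current batch and
# update moving_mean/moving_var (training behavior).
bn_train = nn.BatchNorm3d(num_features=2, use_batch_statistics=True)

# False: always normalize with moving_mean/moving_var without updating them
# (inference behavior).
bn_infer = nn.BatchNorm3d(num_features=2, use_batch_statistics=False)

# None (default): follow the mode set by set_train().
bn_auto = nn.BatchNorm3d(num_features=2)
bn_auto.set_train(True)    # behaves like use_batch_statistics=True
out_train = bn_auto(x)
bn_auto.set_train(False)   # behaves like use_batch_statistics=False
out_infer = bn_auto(x)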
Difference 2
In PyTorch, the network is in training mode by default, while in MindSpore it is in inference mode by default (is_training is False). Use the net.set_train() method in MindSpore to switch the network to training mode; in that case the mean and variance parameters are computed during training. Otherwise, in inference mode, they are loaded from the checkpoint.
Difference 3
The meaning of the momentum parameter of the BatchNorm series operators in MindSpore is opposite to that in PyTorch. The relationship is as follows:
\[momentum_{pytorch} = 1 - momentum_{mindspore}\]
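To make the relationship concrete, the sketch below applies one moving-average update of the running mean in both conventions, following the update rules described in the respective API documentation; the numbers are arbitrary.
# One moving-average update of the running/moving mean.
running_mean = 0.0   # initial statistic in both frameworks
batch_mean = 2.0     # mean of the current batch (arbitrary value)

# PyTorch: running_mean = (1 - momentum) * running_mean + momentum * batch_mean
pt_momentum = 0.1
pt_running_mean = (1 - pt_momentum) * running_mean + pt_momentum * batch_mean

# MindSpore: moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean
ms_momentum = 1 - pt_momentum   # 0.9
ms_moving_mean = ms_momentum * running_mean + (1 - ms_momentum) * batch_mean

print(pt_running_mean, ms_moving_mean)   # 0.2 0.2 -> identical updates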
Code Example
When PyTorch's momentum is chosen so that 1 - momentum equals MindSpore's momentum, both networks are trained with mini-batch data, learn the affine parameters, and produce the same output.
# PyTorch
from torch import nn, tensor
import numpy as np
m = nn.BatchNorm3d(num_features=2, momentum=0.1)
input_x = tensor(np.array([[[[[0.1, 0.2], [0.3, 0.4]]],
[[[0.9, 1], [1.1, 1.2]]]]]).astype(np.float32))
output = m(input_x)
print(output.detach().numpy())
# [[[[[-1.3411044 -0.44703478]
# [ 0.4470349 1.3411044 ]]]
#
#
# [[[-1.3411034 -0.44703388]
# [ 0.44703573 1.3411053 ]]]]]
# MindSpore
from mindspore import Tensor, nn
import numpy as np
m = nn.BatchNorm3d(num_features=2, momentum=0.9)
m.set_train()
# BatchNorm3d<
# (bn2d): BatchNorm2d<num_features=2, eps=1e-05, momentum=0.9, gamma=Parameter (name=bn2d.gamma, shape=(2,), dtype=Float32, requires_grad=True), beta=Parameter (name=bn2d.beta, shape=(2,), dtype=Float32, requires_grad=True), moving_mean=Parameter (name=bn2d.moving_mean, shape=(2,), dtype=Float32, requires_grad=False), moving_variance=Parameter (name=bn2d.moving_variance, shape=(2,), dtype=Float32, requires_grad=False)>
# >
input_x = Tensor(np.array([[[[[0.1, 0.2], [0.3, 0.4]]],
[[[0.9, 1], [1.1, 1.2]]]]]).astype(np.float32))
output = m(input_x)
print(output)
# [[[[[-1.3411044 -0.44703478]
# [ 0.4470349 1.3411044 ]]]
#
#
# [[[-1.3411039 -0.44703427]
# [ 0.44703534 1.341105 ]]]]]