Differences with torch.nn.functional.kl_div

torch.nn.functional.kl_div

torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)

For more information, see torch.nn.functional.kl_div.

mindspore.ops.kl_div

mindspore.ops.kl_div(logits, labels, reduction='mean')

For more information, see mindspore.ops.kl_div.

Differences

PyTorch: Computes the KL divergence of input and target. log_target is a flag indicating whether target is passed in log space.

MindSpore: MindSpore API implements basically the same function as PyTorch, but the log_target parameter is not defined.

| Categories | Subcategories | PyTorch | MindSpore | Difference |
| ---------- | ------------- | ------- | --------- | ---------- |
| Parameters | Parameter 1 | input | logits | Same function, different parameter names; both are input Tensors |
| | Parameter 2 | target | labels | Same function, different parameter names; both are input Tensors |
| | Parameter 3 | size_average | - | PyTorch has deprecated this parameter; MindSpore does not have it |
| | Parameter 4 | reduce | - | PyTorch has deprecated this parameter; MindSpore does not have it |
| | Parameter 5 | reduction | reduction | Same function, same default value ('mean') |
| | Parameter 6 | log_target | - | Parameter not defined in MindSpore; see the workaround sketch below the table |
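
Since mindspore.ops.kl_div has no log_target parameter, a target that is already stored as log-probabilities can be converted back with mindspore.ops.exp before the call. The sketch below is a minimal illustration under that assumption; the log_labels name is illustrative, not part of either API.

# MindSpore: emulating PyTorch's log_target=True
# PyTorch with log_target=True computes exp(target) * (target - input).
# Exponentiating the log-space target first yields the mathematically
# equivalent labels * (log(labels) - logits) that kl_div computes.
import mindspore
from mindspore import Tensor, ops
import numpy as np

logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
log_labels = Tensor(np.log([0.3, 0.4, 0.3]), mindspore.float32)  # target in log space
output = ops.kl_div(logits, ops.exp(log_labels), 'mean')
print(output)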

Code Example

# PyTorch
import torch
import numpy as np

logits = torch.tensor(np.array([0.2, 0.7, 0.1]))
labels = torch.tensor(np.array([0., 1., 0.]))
# reduction defaults to 'mean'; elements where the target is 0 contribute 0
output = torch.nn.functional.kl_div(logits, labels)
print(output)
# tensor(-0.2333, dtype=torch.float64)

# MindSpore
import mindspore
from mindspore import Tensor
import numpy as np

logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
# reduction is the third positional parameter and also defaults to 'mean'
output = mindspore.ops.kl_div(logits, labels, 'mean')
print(output)
# -0.23333333
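
The reduction parameter behaves the same way in both APIs. As a quick check with reduction='sum': for the inputs above only the middle element contributes, so the expected value is three times the mean, i.e. -0.7.

# PyTorch
import torch
import numpy as np

logits = torch.tensor(np.array([0.2, 0.7, 0.1]))
labels = torch.tensor(np.array([0., 1., 0.]))
print(torch.nn.functional.kl_div(logits, labels, reduction='sum'))
# expected: tensor(-0.7000, dtype=torch.float64)

# MindSpore
import mindspore
from mindspore import Tensor
import numpy as np

logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
print(mindspore.ops.kl_div(logits, labels, 'sum'))
# expected: -0.7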