mindspore.nn.PoissonNLLLoss
- class mindspore.nn.PoissonNLLLoss(log_input=True, full=False, eps=1e-08, reduction='mean')[source]
Poisson negative log likelihood loss.
The loss is:
\[\mathcal{L}_{D} = \sum_{i = 0}^{|D|}\left( x_{i} - y_{i}\ln x_{i} + \ln{y_{i}!} \right)\]

where \(\mathcal{L}_{D}\) is the loss, \(y_{i}\) is the target and \(x_{i}\) is the input.
If log_input is True, \(e^{x_{i}} - y_{i} x_{i}\) is used instead of \(x_{i} - y_{i}\ln x_{i}\). When calculating logarithms, the input is clamped below at eps to avoid numerical errors.
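As a plain-numpy sketch of the two elementwise forms above (with the Stirling term omitted, i.e. full=False), assuming mean reduction:

```python
import numpy as np

def poisson_nll(x, y, log_input=True, eps=1e-8):
    """Mean Poisson NLL over all elements; Stirling term omitted (full=False)."""
    if log_input:
        # exp(x_i) - y_i * x_i
        term = np.exp(x) - y * x
    else:
        # x_i - y_i * ln(max(x_i, eps)); eps guards the log against non-positive input
        term = x - y * np.log(np.maximum(x, eps))
    return term.mean()

x = np.array([[0.3, 0.7], [0.5, 0.5]])
y = np.array([[1.0, 2.0], [3.0, 4.0]])
print(round(poisson_nll(x, y), 7))  # 0.3652635, matching the Examples section below
```

This reproduces the value printed in the Examples section, since the module defaults are log_input=True, full=False, reduction='mean'.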
If full is False, the last term \(\ln{y_{i}!}\) is omitted; otherwise it is approximated using Stirling's formula:
\[n! \approx \sqrt{2\pi n}\left( \frac{n}{e} \right)^{n}\]

Note
On Ascend, calculating the logarithm of a negative number or the exponential of a large positive number produces return values and result ranges that differ from those on GPU and CPU.
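A numpy sketch of the Stirling approximation term used when full=True. Taking the logarithm of the formula gives \(\ln{n!} \approx n\ln n - n + \frac{1}{2}\ln(2\pi n)\); masking the term to zero for targets at or below 1 is an assumption here (since \(0! = 1! = 1\) and thus \(\ln{y_i!} = 0\)), not something stated in this page:

```python
import numpy as np

def stirling_term(y):
    # ln(y!) ~= y*ln(y) - y + 0.5*ln(2*pi*y)  (Stirling's formula, in log form)
    # Assumption: applied only where y > 1; 0! = 1! = 1, so ln(y!) = 0 there.
    y_safe = np.maximum(y, 1.0)  # avoid log(0) for the masked-out entries
    t = y_safe * np.log(y_safe) - y_safe + 0.5 * np.log(2 * np.pi * y_safe)
    return np.where(y > 1, t, 0.0)

y = np.array([0.0, 1.0, 3.0, 5.0])
print(stirling_term(y))  # compare against exact ln(y!): 0, 0, ln(6), ln(120)
```

The approximation is already within about 0.02 of the exact \(\ln{5!} = \ln{120} \approx 4.787\) at \(y = 5\), and tightens as the target grows.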
- Parameters
log_input (bool, optional) – Whether to use log input. Default: True.
full (bool, optional) – Whether to include the Stirling approximation term in the loss calculation. Default: False.
eps (float, optional) – Lower bound of input when calculating logarithms. Default: 1e-08.
reduction (str, optional) – Apply the specified reduction method to the output: 'none', 'mean' or 'sum'. Default: 'mean'.
- Inputs:
input (Tensor) - The input Tensor. The shape can be any number of dimensions.
target (Tensor) - The label Tensor which has the same shape as input.
- Outputs:
Tensor or Scalar. If reduction is 'none', the output is a tensor with the same shape as input; otherwise it is a scalar.
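The three reduction modes can be sketched in numpy over the elementwise terms (log_input=True form shown):

```python
import numpy as np

x = np.array([[0.3, 0.7], [0.5, 0.5]])
y = np.array([[1.0, 2.0], [3.0, 4.0]])

elem = np.exp(x) - y * x   # 'none': a tensor with the same shape as the input
print(elem.shape)          # (2, 2)
print(elem.mean())         # 'mean': scalar, matches the 0.3652635 in Examples
print(elem.sum())          # 'sum': scalar
```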
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> x = ms.Tensor([[0.3, 0.7], [0.5, 0.5]])
>>> target = ms.Tensor([[1.0, 2.0], [3.0, 4.0]])
>>> loss = nn.PoissonNLLLoss()
>>> output = loss(x, target)
>>> print(output.asnumpy())
0.3652635