mindspore.nn.RMSELoss
- class mindspore.nn.RMSELoss
RMSELoss creates a criterion to measure the root mean square error between \(x\) and \(y\) element-wise, where \(x\) is the input and \(y\) is the target.
For simplicity, let \(x\) and \(y\) be 1-dimensional Tensors of lengths \(M\) and \(N\), respectively. The unreduced loss (i.e., with the reduction argument set to 'none') of \(x\) and \(y\) is given as shown below; when \(M = N\), both cases coincide and the loss is the ordinary element-wise root mean square error. A short NumPy check of this \(M = N\) case follows the example at the end of this page.
\[\begin{split}loss = \begin{cases} \sqrt{\frac{1}{M}\sum_{m=1,n=1}^{M,N}{(x_m-y_n)^2}}, & \text{if } M > N \\ \sqrt{\frac{1}{N}\sum_{m=1,n=1}^{M,N}{(x_m-y_n)^2}}, & \text{if } M < N \end{cases}\end{split}\]
- Inputs:
logits (Tensor) - Tensor of shape \((x_1, x_2, ..., x_M)\).
labels (Tensor) - Tensor of shape \((y_1, y_2, ..., y_N)\).
- Outputs:
Tensor, the weighted loss as a float tensor.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> loss = nn.RMSELoss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.57735026
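For reference, the printed value can be checked against the formula directly. The following is a minimal NumPy sketch (not part of the official example) for the \(M = N\) case, where the loss reduces to \(\sqrt{\frac{1}{N}\sum_{n=1}^{N}(x_n - y_n)^2}\); the low-order digits of the printed result may vary slightly with dtype and rounding.

>>> import numpy as np
>>> logits_np = np.array([1, 2, 3], dtype=np.float32)
>>> labels_np = np.array([1, 2, 2], dtype=np.float32)
>>> # Root mean square error: square the element-wise differences,
>>> # average them, then take the square root.
>>> rmse = np.sqrt(np.mean((logits_np - labels_np) ** 2))
>>> print(rmse)  # should match the value printed by nn.RMSELoss above
0.57735026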