mindspore.ops.PReLU

class mindspore.ops.PReLU(*args, **kwargs)

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. It is defined as follows:

\[\text{PReLU}(x_i) = \max(0, x_i) + \min(0, w \cdot x_i),\]

where \(x_i\) is an element of a channel of the input and \(w\) is the weight of that channel.
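
As a rough illustration of the formula, here is a plain NumPy sketch of the same element-wise rule (not the operator itself), with an assumed channel weight of 0.25:

>>> import numpy as np
>>> x = np.array([-2.0, -1.0, 0.0, 3.0])
>>> w = 0.25  # assumed channel weight for this sketch
>>> np.maximum(0, x) + np.minimum(0, w * x)  # negatives scaled by w, non-negatives unchanged
array([-0.5 , -0.25,  0.  ,  3.  ])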

Note

0-D or 1-D input_x is not supported on Ascend.

Inputs:
  • input_x (Tensor) - The first input tensor. The data type is float16 or float32. Represents the output of the previous layer.

  • weight (Tensor) - The second input tensor. The data type is float16 or float32. Only two shapes are legitimate: 1, or the number of channels of input_x. The channel dimension is the second dimension of the input; when the input is a 0-D or 1-D tensor, the number of channels is 1 (see the sketch after this list).

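A minimal sketch of the two legitimate weight shapes, assuming PyNative execution and an input whose channel dimension has size 3 (the weight values are illustrative):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> prelu = ops.PReLU()
>>> input_x = Tensor(np.ones((2, 3, 2)), mindspore.float32)  # channel dim is the 2nd dim, size 3
>>> w_shared = Tensor(np.array([0.25]), mindspore.float32)  # shape 1: one weight shared across channels
>>> w_per_channel = Tensor(np.array([0.1, 0.6, 0.3]), mindspore.float32)  # shape 3: one weight per channel
>>> output = prelu(input_x, w_shared)
>>> output = prelu(input_x, w_per_channel)
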
Outputs:

Tensor, with the same shape and data type as input_x.

For detailed information, please refer to nn.PReLU.

Raises
  • TypeError – If dtype of input_x or weight is neither float16 nor float32.

  • TypeError – If the input_x or the weight is not a Tensor.

  • ValueError – If the input_x is a 0-D or 1-D Tensor on Ascend.

  • ValueError – If the weight is not a 1-D Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.prelu = ops.PReLU()
...     def construct(self, input_x, weight):
...         result = self.prelu(input_x, weight)
...         return result
...
>>> input_x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> output = net(input_x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.00  5.00]]]
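
In this output, every element of the first block of input_x is negative, so each channel row is scaled by its weight (0.1, 0.6, and -0.3 respectively); the second block is non-negative and passes through unchanged.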