mindspore.ops.kernel

mindspore.ops.kernel(fn=None, reg_info=None, compile_attrs=None)[source]

The decorator of the Hybrid DSL function for the Custom Op. When a function written in the Hybrid DSL is decorated by kernel, it can run as a normal Python function. The decorated function can also be passed to mindspore.ops.Custom to create a custom operator, with func_type “hybrid” or “pyfunc”. Creating a mindspore.ops.Custom with func_type “hybrid” from a Hybrid DSL function provides automatic dtype/shape inference for free.

Parameters
  • fn (Function) – The Python function that will be run as a custom operator. Default: None .

  • reg_info (tuple[str, dict]) – Each item represents registration information for the operator, given as a json string or a dict. Default: None .

  • compile_attrs (Dict) – The Python dict used to distinguish between compiled versions of the function. Default: None .

Returns

Function. If fn is not None, returns a callable that executes the Hybrid DSL function; if fn is None, returns a decorator, and when that decorator is invoked with a single fn argument, the resulting callable is the same as in the case where fn is not None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import ops, Tensor
>>> from mindspore.ops import kernel, DataType, CustomRegOp
...
>>> # Create a dict for the compile flags.
>>> attrs = {
...     "test1": True,
...     "test2": "good",
...     "test3": 12,
... }
>>> # Create the reg info json string.
>>> op_gpu_info = CustomRegOp() \
...     .input(0, "a") \
...     .input(0, "b") \
...     .output(0, "y") \
...     .dtype_format(DataType.F32_None, DataType.F32_None, DataType.F32_None) \
...     .target("GPU") \
...     .get_op_info()
>>>
>>> # Create inputs for the custom op.
>>> input_x = np.ones([4, 4]).astype(np.float32)
>>> input_y = np.ones([4, 4]).astype(np.float32)
...
>>> # Write a Hybrid DSL function through the decorator @kernel.
>>> # We can also pass the compile attrs and the reg info through the decorator.
>>> @kernel(reg_info=op_gpu_info, compile_attrs=attrs)
... def outer_product(a, b):
...     c = output_tensor(a.shape, a.dtype)
...
...     with block_realize(c):
...         for i0 in range(a.shape[0]):
...             for i1 in range(b.shape[1]):
...                 c[i0, i1] = 0.0
...                 for i2 in range(a.shape[1]):
...                     c[i0, i1] = c[i0, i1] + (a[i0, i2] * b[i2, i1])
...     return c
...
>>> # We can use the function directly as a python function.
>>> # In this case, the inputs should be numpy arrays.
>>> result = outer_product(input_x, input_y)
...
>>> # Create a custom op with func_type "hybrid" (the default) from the Hybrid DSL function.
>>> # In this case, we get automatic dtype/shape inference for free.
>>> # The inputs should be MindSpore tensors.
>>> test_op_hybrid = ops.Custom(outer_product)
>>> output = test_op_hybrid(Tensor(input_x), Tensor(input_y))
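>>>
>>> # As noted in Returns, the decorator can also be applied without arguments,
>>> # in which case fn is passed directly and the decorated function is callable
>>> # right away. This is a minimal sketch; the add_one kernel below is
>>> # illustrative and not part of the operator above.
>>> @kernel
... def add_one(a):
...     c = output_tensor(a.shape, a.dtype)
...     for i0 in range(a.shape[0]):
...         for i1 in range(a.shape[1]):
...             c[i0, i1] = a[i0, i1] + 1.0
...     return c
...
>>> # Called directly, it runs as a normal Python function on numpy arrays.
>>> out = add_one(input_x)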
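>>>
>>> # As the description mentions, the same Hybrid DSL function can also back a
>>> # Custom op with func_type "pyfunc". This is only a sketch: automatic
>>> # dtype/shape inference is limited to "hybrid", so out_shape and out_dtype
>>> # are supplied explicitly here, and the lambda infer functions (which return
>>> # the first input's shape and dtype) are illustrative.
>>> test_op_pyfunc = ops.Custom(outer_product,
...                             out_shape=lambda a, b: a,
...                             out_dtype=lambda a, b: a,
...                             func_type="pyfunc")
>>> output_pyfunc = test_op_pyfunc(Tensor(input_x), Tensor(input_y))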