mindquantum.framework.MQLayer

class mindquantum.framework.MQLayer(expectation_with_grad, weight='normal')

Quantum neural network that consists of an encoder circuit and an ansatz circuit.

The encoder circuit encodes classical data into a quantum state, while the ansatz circuit acts as the trainable circuit.
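
A minimal sketch of this split (the gates and parameter names here are illustrative, following the pattern used in the Examples below): parameters of a circuit marked with as_encoder() are supplied through the input tensor at every call, while parameters of a circuit marked with as_ansatz() become the layer's trainable weight.

>>> from mindquantum.core.circuit import Circuit
>>> enc = Circuit().ry('alpha', 0).as_encoder()    # 'alpha' is fed in through enc_data
>>> ans = Circuit().rx('theta', 0).as_ansatz()     # 'theta' becomes the trainable weight
>>> full_circuit = enc + ans                       # the combined circuit is passed to the grad ops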

Parameters
  • expectation_with_grad (GradOpsWrapper) – a gradient operator that takes encoder data and ansatz data and returns the expectation value and the gradient of the expectation with respect to the parameters.

  • weight (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable ansatz parameters. It can be a Tensor, a string, an Initializer or a number. When a string is specified, values from the 'TruncatedNormal', 'Normal', 'Uniform', 'HeUniform' and 'XavierUniform' distributions, as well as the constant 'One' and 'Zero' distributions, are possible. The aliases 'xavier_uniform', 'he_uniform', 'ones' and 'zeros' are also acceptable, and both uppercase and lowercase are accepted. Refer to the values of Initializer for more details. Default: 'normal'. See the sketch below for example usage.
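
As a brief sketch of the weight options (the circuit, operator and weight values chosen here are illustrative and mirror the Examples below), the trainable weight can be set with a named initializer string or given explicitly as a 1-D Tensor with one entry per ansatz parameter:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindquantum.core.circuit import Circuit
>>> from mindquantum.core.operators import Hamiltonian, QubitOperator
>>> from mindquantum.framework import MQLayer
>>> from mindquantum.simulator import Simulator
>>> circ = Circuit().ry('a', 0).as_encoder() + Circuit().rx('b', 0).as_ansatz()
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> grad_ops = Simulator('mqvector', 1).get_expectation_with_grad(ham, circ)
>>> net_zero = MQLayer(grad_ops, weight='zeros')   # named initializer (alias of 'Zero')
>>> net_set = MQLayer(grad_ops, weight=ms.Tensor(np.array([0.5], dtype=np.float32)))  # one entry for the single ansatz parameter 'b'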

Inputs:
  • enc_data (Tensor) - Tensor of encoder data that you want to encode into a quantum state.

Outputs:

Tensor, the expectation value of the Hamiltonian.

Raises

ValueError – If the length of the shape of weight is not equal to 1, or if shape[0] of weight is not equal to weight_size (the number of trainable ansatz parameters).

Supported Platforms:

GPU, CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindquantum.core.circuit import Circuit
>>> from mindquantum.core.operators import Hamiltonian, QubitOperator
>>> from mindquantum.framework import MQLayer
>>> from mindquantum.simulator import Simulator
>>> ms.set_seed(42)
>>> ms.set_context(mode=ms.PYNATIVE_MODE, device_target="CPU")
>>> enc = Circuit().ry('a', 0).as_encoder()
>>> ans = Circuit().h(0).rx('b', 0).as_ansatz()
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('mqvector', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, enc + ans)
>>> enc_data = ms.Tensor(np.array([[0.1]]))
>>> net = MQLayer(grad_ops)
>>> opti = ms.nn.Adam(net.trainable_params(), learning_rate=0.1)
>>> train_net = ms.nn.TrainOneStepCell(net, opti)
>>> for i in range(100):
...     res = train_net(enc_data)
>>> net.weight.asnumpy()
array([3.1423748], dtype=float32)
>>> net(enc_data)
Tensor(shape=[1, 1], dtype=Float32, value=
[[-9.98333842e-02]])
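
For this particular circuit the expectation of Z0 works out to sin(a)·cos(b), so minimizing the layer output drives the trainable parameter b toward π (≈ 3.1416, matching net.weight above), while the expectation at a = 0.1 approaches -sin(0.1) ≈ -0.0998, matching the final output.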