mindquantum.framework.MQN2Layer

class mindquantum.framework.MQN2Layer(expectation_with_grad, weight='normal')[source]

MindQuantum trainable layer. The parameters of the ansatz circuit are the trainable parameters. This layer automatically computes the square of the absolute value of the expectation value.
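As a plain-NumPy illustration (a sketch only, not MindQuantum's implementation), the quantity this layer outputs for a state |psi> and Hamiltonian H is |<psi|H|psi>|^2:

```python
import numpy as np

# Illustrative sketch: for a single qubit prepared by RY(a)|0>,
# the state is [cos(a/2), sin(a/2)] and <psi|Z|psi> = cos(a).
def ry_state(a):
    return np.array([np.cos(a / 2), np.sin(a / 2)], dtype=complex)

def abs_expectation_squared(state, ham):
    e = np.vdot(state, ham @ state)  # <psi|H|psi>
    return np.abs(e) ** 2

Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = ry_state(0.1)
print(abs_expectation_squared(psi, Z))  # cos(0.1)**2, about 0.9900
```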

Parameters
  • expectation_with_grad (GradOpsWrapper) – a grad ops wrapper that receives encoder data and ansatz data and returns the square of the absolute value of the expectation value, together with the gradient of that value with respect to the parameters.

  • weight (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable ansatz parameters. It can be a Tensor, a string, an Initializer or a number. When a string is specified, values from the ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions, as well as the constant ‘One’ and ‘Zero’ distributions, are possible. The aliases ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are also acceptable, and both uppercase and lowercase are accepted. Refer to the values of Initializer for more details. Default: ‘normal’.

Inputs:
  • enc_data (Tensor) - Tensor of the encoder data to be encoded into the quantum state.

Outputs:

Tensor, the square of the absolute value of the expectation value of the Hamiltonian.

Raises

ValueError – If the length of the shape of weight is not equal to 1, or if shape[0] of weight is not equal to weight_size.
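A hypothetical sketch of this shape check (the function and argument names are illustrative, not MindQuantum source): the weight must be a 1-D array whose length equals the number of ansatz parameters.

```python
import numpy as np

def check_weight_shape(weight, weight_size):
    """Raise ValueError unless weight is 1-D with length weight_size."""
    shape = np.shape(weight)
    if len(shape) != 1 or shape[0] != weight_size:
        raise ValueError(
            f"weight must have shape ({weight_size},), got {shape}"
        )

check_weight_shape(np.zeros(3), 3)       # passes silently
# check_weight_shape(np.zeros((3, 1)), 3)  # would raise ValueError
```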

Supported Platforms:

GPU, CPU

Examples

>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQN2Layer
>>> import mindspore as ms
>>> ms.set_seed(42)
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> enc = Circuit().ry('a', 0)
>>> ans = Circuit().h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, enc+ans,
...                                          encoder_params_name=['a'],
...                                          ansatz_params_name=['b'])
>>> enc_data = ms.Tensor(np.array([[0.1]]))
>>> net = MQN2Layer(grad_ops)
>>> opti = ms.nn.Adam(net.trainable_params(), learning_rate=0.1)
>>> train_net = ms.nn.TrainOneStepCell(net, opti)
>>> for i in range(100):
...     train_net(enc_data)
>>> net.weight.asnumpy()
array([1.5646162], dtype=float32)
>>> net(enc_data)
Tensor(shape=[1, 1], dtype=Float32, value=
[[ 3.80662982e-07]])
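
The trained result can be cross-checked with plain NumPy (a sketch assuming ideal simulation; the matrices below are the standard gate definitions, not MindQuantum internals). With a = 0.1 and the trained b ≈ 1.5646, the state RX(b)·H·RY(a)|0> gives an expectation of Z close to 0, so |<Z>|^2 is near zero, matching the output above.

```python
import numpy as np

def ry(t):  # standard single-qubit RY gate
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rx(t):  # standard single-qubit RX gate
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
Z = np.diag([1.0, -1.0]).astype(complex)

a, b = 0.1, 1.5646162                      # encoder input, trained weight
psi = rx(b) @ H @ ry(a) @ np.array([1, 0], dtype=complex)
loss = np.abs(np.vdot(psi, Z @ psi)) ** 2  # |<psi|Z|psi>|^2
print(loss)                                # on the order of 4e-7
```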