mindsponge.cell.InvariantPointAttention
- class mindsponge.cell.InvariantPointAttention(num_head, num_scalar_qk, num_scalar_v, num_point_v, num_point_qk, num_channel, pair_dim)[source]
Invariant Point Attention module. This module updates the sequence representation, which is the first input (inputs_1d), by adding location information to it.
The attention consists of three parts: q, k, v, obtained from the sequence representation; q', k', v', obtained from the interaction between the sequence representation and the rigid body group; and the bias b, obtained from the pair representation (the second input, inputs_2d).
\[a_{ij} = Softmax(w_l(c_1{q_i}^Tk_j+b_{ij}-c_2\sum {\left \| T_i\circ q'_i-T_j\circ k'_j \right \| ^{2} }))\]
where i and j denote the i-th and j-th amino acids in the sequence, respectively, and T denotes the rotation and translation in the input.
Jumper et al. (2021) Suppl. Alg. 22 “InvariantPointAttention”.
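The attention weights above can be sketched in plain NumPy. This is a simplified, single-head illustration of the formula only, not the module's implementation: `w_l`, `c1`, and `c2` are treated as plain constants rather than learned/derived weights, and the point dimension is assumed to be the last axis.

```python
import numpy as np

def ipa_logits(q, k, b, qp, kp, rot, trans, w_l=1.0, c1=1.0, c2=1.0):
    """Single-head sketch of the invariant point attention weights.

    q, k:   [N, d]    scalar queries/keys
    b:      [N, N]    pair bias b_ij
    qp, kp: [N, P, 3] point queries/keys in each residue's local frame
    rot:    [N, 3, 3] rotation of the rigid transform T_i
    trans:  [N, 3]    translation of the rigid transform T_i
    """
    scalar = c1 * q @ k.T                                 # c1 * q_i^T k_j
    # Map points into the global frame: T_i o x = R_i x + t_i
    gq = np.einsum('nab,npb->npa', rot, qp) + trans[:, None, :]
    gk = np.einsum('nab,npb->npa', rot, kp) + trans[:, None, :]
    # Squared distance between transformed points, summed over the P points
    d2 = ((gq[:, None] - gk[None, :]) ** 2).sum(axis=(-1, -2))
    logits = w_l * (scalar + b - c2 * d2)
    # Softmax over j yields the attention weights a_ij
    a = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return a / a.sum(axis=-1, keepdims=True)
```

Note how the distance term makes attention decay for residue pairs whose transformed query/key points lie far apart, which is what injects spatial information into the sequence update.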
- Parameters
num_head (int) – The number of the heads.
num_scalar_qk (int) – The number of the scalar query/key.
num_scalar_v (int) – The number of the scalar value.
num_point_v (int) – The number of the point value.
num_point_qk (int) – The number of the point query/key.
num_channel (int) – The number of the channel.
pair_dim (int) – The last dimension length of pair.
- Inputs:
inputs_1d (Tensor) - The first row of msa representation which is the output of evoformer module, also called the sequence representation, shape \([N_{res}, num\_channel]\).
inputs_2d (Tensor) - The pair representation which is the output of evoformer module, shape \([N_{res}, N_{res}, pair\_dim]\).
mask (Tensor) - A mask that determines which elements of inputs_1d participate in the attention calculation, shape \([N_{res}, 1]\).
rotation (tuple) - The rotation term of the rigid body group T(r, t): a tuple of length 9, where each element has shape \([N_{res}]\).
translation (tuple) - The translation term of the rigid body group T(r, t): a tuple of length 3, where each element has shape \([N_{res}]\).
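The flat rotation/translation tuples above can be assembled into per-residue rigid transforms. This sketch assumes the nine rotation elements are in row-major order (xx, xy, xz, yx, ..., zz); that ordering is an assumption for illustration, not stated by this page.

```python
import numpy as np

def apply_rigid(rotation, translation, points):
    """Apply T o x = R x + t per residue.

    rotation:    tuple of 9 arrays, each [N] (assumed row-major R entries)
    translation: tuple of 3 arrays, each [N]
    points:      [N, 3] one point per residue, in local coordinates
    """
    n = rotation[0].shape[0]
    rot = np.stack(rotation, axis=-1).reshape(n, 3, 3)   # [N, 3, 3]
    trans = np.stack(translation, axis=-1)               # [N, 3]
    return np.einsum('nab,nb->na', rot, points) + trans
```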
- Outputs:
Tensor, the update of inputs_1d, shape \([N_{res}, num\_channel]\).
- Supported Platforms:
Ascend
GPU
Examples
>>> import numpy as np
>>> from mindsponge.cell import InvariantPointAttention
>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor
>>> import mindspore.context as context
>>> context.set_context(mode=context.GRAPH_MODE)
>>> model = InvariantPointAttention(num_head=12, num_scalar_qk=16, num_scalar_v=16,
...                                 num_point_v=8, num_point_qk=4,
...                                 num_channel=384, pair_dim=128)
>>> inputs_1d = Tensor(np.ones((256, 384)), mstype.float32)
>>> inputs_2d = Tensor(np.ones((256, 256, 128)), mstype.float32)
>>> mask = Tensor(np.ones((256, 1)), mstype.float32)
>>> rotation = tuple([Tensor(np.ones(256), mstype.float16) for _ in range(9)])
>>> translation = tuple([Tensor(np.ones(256), mstype.float16) for _ in range(3)])
>>> attn_out = model(inputs_1d, inputs_2d, mask, rotation, translation)
>>> print(attn_out.shape)
(256, 384)