mindsponge.cell.MSAColumnAttention
- class mindsponge.cell.MSAColumnAttention(num_head, key_dim, gating, msa_act_dim, batch_size=None, slice_num=0)[source]
MSA column-wise gated self-attention. The column-wise attention allows elements belonging to the same target residue to exchange information. A conceptual sketch of the computation follows the Examples section below.
- Parameters
num_head (int) – The number of heads.
key_dim (int) – The dimension of the input.
gating (bool) – Indicator of whether the attention is gated.
msa_act_dim (int) – The dimension of msa_act. msa_act is the intermediate variable after MSA retrieval in AlphaFold.
batch_size (int) – The batch size of parameters in MSAColumnAttention, used in while control flow. Default: None.
slice_num (int) – The number of slices to be made to reduce memory. Default: 0.
- Inputs:
msa_act (Tensor) - Tensor of msa_act. The intermediate variable after MSA retrieving in AlphaFold, shape \([N_{seqs}, N_{res}, C_m]\) .
msa_mask (Tensor) - The mask for MSAColumnAttention matrix, shape \([N_{seqs}, N_{res}]\).
index (Tensor) - The index of the while loop, only used in case of while control flow. Default: None.
- Outputs:
Tensor, the float tensor of the msa_act of the layer, shape \([N_{seqs}, N_{res}, C_m]\).
- Supported Platforms:
Ascend
GPU
Examples
>>> import numpy as np
>>> from mindsponge.cell import MSAColumnAttention
>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor
>>> model = MSAColumnAttention(num_head=8, key_dim=256, gating=True,
...                            msa_act_dim=256, batch_size=1, slice_num=0)
>>> msa_act = Tensor(np.ones((512, 256, 256)), mstype.float32)
>>> msa_mask = Tensor(np.ones((512, 256)), mstype.float32)
>>> index = Tensor(0, mstype.int32)
>>> attn_out = model(msa_act, msa_mask, index)
>>> print(attn_out.shape)
(512, 256, 256)
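The snippet below is a minimal NumPy sketch of what column-wise gated self-attention computes, intended only to illustrate the class description above: attention is taken over the sequence axis of each residue column, and a sigmoid gate derived from the input modulates the attended values. The function name, the random projection weights, and the masking constant are illustrative assumptions; the actual cell uses learned MindSpore parameters and may differ in details such as bias terms and slicing.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def column_gated_attention(msa_act, msa_mask, num_head, key_dim, rng):
    # msa_act: [N_seqs, N_res, C_m]; msa_mask: [N_seqs, N_res]
    n_seq, n_res, c_m = msa_act.shape
    head_dim = key_dim // num_head

    # Illustrative random projection weights (the real cell holds learned parameters).
    w_q = rng.standard_normal((c_m, num_head, head_dim)) / np.sqrt(c_m)
    w_k = rng.standard_normal((c_m, num_head, head_dim)) / np.sqrt(c_m)
    w_v = rng.standard_normal((c_m, num_head, head_dim)) / np.sqrt(c_m)
    w_g = rng.standard_normal((c_m, num_head, head_dim)) / np.sqrt(c_m)
    w_o = rng.standard_normal((num_head, head_dim, c_m)) / np.sqrt(key_dim)

    # Transpose so attention runs over the sequence axis of each residue column.
    x = msa_act.transpose(1, 0, 2)               # [N_res, N_seqs, C_m]
    mask = msa_mask.transpose(1, 0)              # [N_res, N_seqs]

    q = np.einsum('rsc,chd->rshd', x, w_q) / np.sqrt(head_dim)
    k = np.einsum('rsc,chd->rshd', x, w_k)
    v = np.einsum('rsc,chd->rshd', x, w_v)

    # Attention weights over sequences within each residue column.
    logits = np.einsum('rqhd,rkhd->rhqk', q, k)      # [N_res, heads, N_seqs, N_seqs]
    logits += (mask[:, None, None, :] - 1.0) * 1e9   # mask out padded sequences
    weights = softmax(logits, axis=-1)

    out = np.einsum('rhqk,rkhd->rqhd', weights, v)   # [N_res, N_seqs, heads, head_dim]

    # Sigmoid gate computed from the input, applied per head before the output projection.
    gate = 1.0 / (1.0 + np.exp(-np.einsum('rsc,chd->rshd', x, w_g)))
    out = gate * out

    out = np.einsum('rshd,hdc->rsc', out, w_o)       # back to [N_res, N_seqs, C_m]
    return out.transpose(1, 0, 2)                    # [N_seqs, N_res, C_m]

With the same shapes as the Examples above, column_gated_attention(np.ones((512, 256, 256), np.float32), np.ones((512, 256), np.float32), num_head=8, key_dim=256, rng=np.random.default_rng(0)) returns an array of shape (512, 256, 256).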