
mindsponge.cell.MSARowAttentionWithPairBias

class mindsponge.cell.MSARowAttentionWithPairBias(num_head, key_dim, gating, msa_act_dim, pair_act_dim, batch_size=None, slice_num=0)[source]

MSA row attention. Information from the pair activations is added as a bias to the MSARowAttention matrix, so that the MSA representation is updated using pair information.

Reference:

Jumper et al. (2021) Suppl. Alg. 7 'MSARowAttentionWithPairBias'.
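The core idea can be sketched in NumPy (a simplified, single-head illustration with random stand-ins for the learned projections; the actual layer also includes layer normalization, gating, and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions, for illustration only
n_seqs, n_res, msa_dim, pair_dim, key_dim = 2, 5, 8, 6, 4
rng = np.random.default_rng(0)
msa_act = rng.standard_normal((n_seqs, n_res, msa_dim))
pair_act = rng.standard_normal((n_res, n_res, pair_dim))

# Random stand-ins for learned projection weights
w_q = rng.standard_normal((msa_dim, key_dim))
w_k = rng.standard_normal((msa_dim, key_dim))
w_v = rng.standard_normal((msa_dim, key_dim))
w_b = rng.standard_normal((pair_dim,))   # projects pair_act to a scalar bias
w_o = rng.standard_normal((key_dim, msa_dim))

q = msa_act @ w_q                        # (Nseqs, Nres, key_dim)
k = msa_act @ w_k
v = msa_act @ w_v
bias = pair_act @ w_b                    # (Nres, Nres): pair info as attention bias

# Attention over residues within each row; the pair bias is shared
# across all sequences (broadcast over the first axis).
logits = q @ k.transpose(0, 2, 1) / np.sqrt(key_dim) + bias
weights = softmax(logits, axis=-1)
out = (weights @ v) @ w_o                # (Nseqs, Nres, msa_dim)
print(out.shape)                         # (2, 5, 8)
```

Because the bias depends only on the residue pair (i, j), every sequence in the MSA attends with the same pair-derived offset, which is how pair information steers the MSA update.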

Parameters
  • num_head (int) – The number of attention heads.

  • key_dim (int) – The dimension of the attention hidden layer.

  • gating (bool) – Whether the attention is gated.

  • msa_act_dim (int) – The dimension of the msa_act.

  • pair_act_dim (int) – The dimension of the pair_act.

  • batch_size (int) – The batch size of parameters in MSA row attention, used in while control flow. Default: None.

  • slice_num (int) – The number of slices to be made to reduce memory. Default: 0.

Inputs:
  • msa_act (Tensor) - Tensor of msa_act with shape (Nseqs, Nres, msa_act_dim).

  • msa_mask (Tensor) - The mask for the MSA row attention matrix with shape (Nseqs, Nres).

  • pair_act (Tensor) - Tensor of pair_act with shape (Nres, Nres, pair_act_dim). Data type is float.

  • index (Tensor) - The index of the while loop, only used in case of while control flow. Default: None.

  • norm_msa_mask (Tensor) - The mask applied to msa_act during layer normalization, with shape (Nseqs, Nres). Default: None.

  • norm_pair_mask (Tensor) - The mask applied to pair_act during layer normalization, with shape (Nres, Nres). Default: None.

  • res_idx (Tensor) - The residue index used to perform rotary position embedding (RoPE), with shape (Nres). Default: None.

Outputs:

Tensor, the float tensor of msa_act output by the layer, with shape (Nseqs, Nres, msa_act_dim).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindsponge.cell import MSARowAttentionWithPairBias
>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor
>>> model = MSARowAttentionWithPairBias(num_head=4, key_dim=4, gating=True,
...                                     msa_act_dim=64, pair_act_dim=128,
...                                     batch_size=None)
>>> msa_act = Tensor(np.ones((4, 256, 64)), mstype.float32)
>>> msa_mask = Tensor(np.ones((4, 256)), mstype.float16)
>>> pair_act = Tensor(np.ones((256, 256, 128)), mstype.float32)
>>> index = None
>>> msa_out = model(msa_act, msa_mask, pair_act, index)
>>> print(msa_out.shape)
(4, 256, 64)