
mindsponge.cell.InvariantPointAttention

class mindsponge.cell.InvariantPointAttention(num_head, num_scalar_qk, num_scalar_v, num_point_v, num_point_qk, num_channel, pair_dim)

Invariant point attention (IPA) module. This module is used to update the sequence representation, which is the first input (inputs_1d), by adding location information to it.

The attention consists of three parts: q, k, and v, obtained from the sequence representation; q', k', and v', obtained from the interaction between the sequence representation and the rigid body group; and b, the bias, obtained from the pair representation (the second input, inputs_2d).

$$a_{ij} = \mathrm{Softmax}\left(w_l\left(c_1 q_i^{T} k_j + b_{ij} - c_2 \left\lVert T_i \circ q_i - T_j \circ k_j \right\rVert^2\right)\right)$$

where i and j represent the ith and jth amino acids in the sequence, respectively, and T denotes the rigid transformation (rotation and translation) from the input.

Jumper et al. (2021) Suppl. Alg. 22 "InvariantPointAttention".
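The logits combine the three parts described above: a scalar query-key product, a pair bias, and a penalty on the distance between transformed points. The following NumPy sketch illustrates that combination under toy shapes; c1, c2, and w_l are placeholder values, and the points are taken as already transformed by T. It is an illustration only, not the module's actual implementation.

import numpy as np

Nres, c = 8, 16                          # toy sequence length and channel size
rng = np.random.default_rng(0)
q = rng.normal(size=(Nres, c))           # scalar queries from the sequence representation
k = rng.normal(size=(Nres, c))           # scalar keys from the sequence representation
b = rng.normal(size=(Nres, Nres))        # pair bias from the pair representation
p_q = rng.normal(size=(Nres, 3))         # query points, assumed already mapped by T_i
p_k = rng.normal(size=(Nres, 3))         # key points, assumed already mapped by T_j

c1 = 1.0 / np.sqrt(c)                    # assumed scalar-term scaling
c2 = 0.5                                 # assumed point-term scaling
w_l = 1.0 / np.sqrt(3.0)                 # assumed overall logit weight

# squared distance ||T_i o q_i - T_j o k_j||^2 for every residue pair
dist2 = ((p_q[:, None, :] - p_k[None, :, :]) ** 2).sum(-1)

logits = w_l * (c1 * (q @ k.T) + b - c2 * dist2)
a = np.exp(logits - logits.max(axis=-1, keepdims=True))
a = a / a.sum(axis=-1, keepdims=True)    # Softmax over j
print(a.shape)                           # (8, 8)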

Parameters
  • num_head (int) – The number of the heads.

  • num_scalar_qk (int) – The number of the scalar query/key.

  • num_scalar_v (int) – The number of the scalar value.

  • num_point_v (int) – The number of the point value.

  • num_point_qk (int) – The number of the point query/key.

  • num_channel (int) – The number of the channel.

  • pair_dim (int) – The last dimension length of the pair representation (inputs_2d).

Inputs:
  • inputs_1d (Tensor) - The first row of the MSA representation, which is the output of the Evoformer module; also called the sequence representation. Shape: [Nres, num_channel].

  • inputs_2d (Tensor) - The pair representation, which is the output of the Evoformer module. Shape: [Nres, Nres, pair_dim].

  • mask (Tensor) - A mask that determines which elements of inputs_1d are involved in the attention calculation. Shape: [Nres, 1].

  • rotation (tuple) - The rotation term of the rigid body group T(r, t). A tuple of length 9; the shape of each element in the tuple is [Nres]. One way to pack these tuples is sketched after this list.

  • translation (tuple) - The translation term of the rigid body group T(r, t). A tuple of length 3; the shape of each element in the tuple is [Nres].
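The following sketch shows one plausible way to build the rotation and translation tuples from per-residue 3x3 rotation matrices and 3-vectors. The row-major element order (xx, xy, xz, yx, ..., zz) is an assumption, not something this page confirms.

import numpy as np
from mindspore import Tensor

Nres = 256
# identity rigid transforms for every residue (illustration only)
rot_mats = np.tile(np.eye(3, dtype=np.float16), (Nres, 1, 1))   # [Nres, 3, 3]
trans_vecs = np.zeros((Nres, 3), dtype=np.float16)              # [Nres, 3]

# tuple of 9 tensors of shape [Nres]; row-major order is assumed
rotation = tuple(Tensor(rot_mats[:, i, j]) for i in range(3) for j in range(3))
# tuple of 3 tensors of shape [Nres]
translation = tuple(Tensor(trans_vecs[:, i]) for i in range(3))
print(len(rotation), len(translation))   # 9 3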

Outputs:

Tensor, the update of inputs_1d. Shape: [Nres, num_channel].

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindsponge.cell import InvariantPointAttention
>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor
>>> import mindspore.context as context
>>> context.set_context(mode=context.GRAPH_MODE)
>>> model = InvariantPointAttention(num_head=12, num_scalar_qk=16, num_scalar_v=16,
...                                 num_point_v=8, num_point_qk=4,
...                                 num_channel=384, pair_dim=128)
>>> inputs_1d = Tensor(np.ones((256, 384)), mstype.float32)
>>> inputs_2d = Tensor(np.ones((256, 256, 128)), mstype.float32)
>>> mask = Tensor(np.ones((256, 1)), mstype.float32)
>>> rotation = tuple([Tensor(np.ones(256), mstype.float16) for _ in range(9)])
>>> translation = tuple([Tensor(np.ones(256), mstype.float16) for _ in range(3)])
>>> attn_out = model(inputs_1d, inputs_2d, mask, rotation, translation)
>>> print(attn_out.shape)
(256, 384)