
mindflow.cell.ViT

class mindflow.cell.ViT(image_size=(192, 384), in_channels=7, out_channels=3, patch_size=16, encoder_depths=12, encoder_embed_dim=768, encoder_num_heads=12, decoder_depths=8, decoder_embed_dim=512, decoder_num_heads=16, mlp_ratio=4, dropout_rate=1.0, compute_dtype=mstype.float16)[source]

This module is based on the ViT backbone and consists of an encoder, a decoding_embedding, a decoder, and a dense layer.

Parameters
  • image_size (tuple[int]) – The size of the input image. Default: (192, 384).

  • in_channels (int) – The number of input feature channels. Default: 7.

  • out_channels (int) – The number of output feature channels. Default: 3.

  • patch_size (int) – The patch size of the image. Default: 16.

  • encoder_depths (int) – The depth of the encoder. Default: 12.

  • encoder_embed_dim (int) – The embedding dimension of the encoder. Default: 768.

  • encoder_num_heads (int) – The number of attention heads in the encoder. Default: 12.

  • decoder_depths (int) – The depth of the decoder. Default: 8.

  • decoder_embed_dim (int) – The embedding dimension of the decoder. Default: 512.

  • decoder_num_heads (int) – The number of attention heads in the decoder. Default: 16.

  • mlp_ratio (int) – The ratio of the MLP layer. Default: 4.

  • dropout_rate (float) – The rate of the dropout layer. Default: 1.0.

  • compute_dtype (dtype) – The data type of the encoder, decoding_embedding, decoder, and dense layer. Default: mstype.float16.

Inputs:
  • input (Tensor) - Tensor of shape (batch_size, feature_size, image_height, image_width).

Outputs:
  • output (Tensor) - Tensor of shape (batch_size, patchify_size, embed_dim), where patchify_size = (image_height * image_width) / (patch_size * patch_size).
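
As a quick sanity check (plain Python arithmetic, not part of the ViT API), the default image_size=(192, 384) and patch_size=16 give a sequence length of 288 patches, which matches the example output shape below:

>>> image_height, image_width, patch_size = 192, 384, 16
>>> (image_height * image_width) // (patch_size * patch_size)
288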

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import context
>>> from mindspore import dtype as mstype
>>> from mindflow.cell import ViT
>>> input_tensor = Tensor(np.ones((32, 3, 192, 384)), mstype.float32)
>>> print(input_tensor.shape)
(32, 3, 192, 384)
>>> model = ViT(in_channels=3,
...             out_channels=3,
...             encoder_depths=6,
...             encoder_embed_dim=768,
...             encoder_num_heads=12,
...             decoder_depths=6,
...             decoder_embed_dim=512,
...             decoder_num_heads=16,
...             )
>>> output_tensor = model(input_tensor)
>>> print(output_tensor.shape)
(32, 288, 768)
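
As a follow-up sketch (illustrative only, not part of the MindFlow API), the patch-level output can be fed to an element-wise loss such as mindspore.nn.MSELoss against a target of the same patchified shape; the context call is optional and assumes a GPU or Ascend device is available (switch device_target accordingly):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, context
>>> from mindspore import dtype as mstype
>>> from mindflow.cell import ViT
>>> context.set_context(mode=context.GRAPH_MODE, device_target="GPU")  # or "Ascend"
>>> model = ViT(in_channels=3, out_channels=3, encoder_depths=6, decoder_depths=6)
>>> x = Tensor(np.ones((32, 3, 192, 384)), mstype.float32)
>>> # dummy target with the same patchified shape as the model output
>>> target = Tensor(np.ones((32, 288, 768)), mstype.float32)
>>> loss = nn.MSELoss()(model(x), target)
>>> print(loss.shape)
()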