mindspore.dataset.vision.PadToSize
- class mindspore.dataset.vision.PadToSize(size, offset=None, fill_value=0, padding_mode=Border.CONSTANT)
Pad the image to a fixed size.
- Parameters
size (Union[int, Sequence[int, int]]) – The target size to pad. If int is provided, pad the image to [size, size]. If Sequence[int, int] is provided, it should be in order of [height, width].
offset (Union[int, Sequence[int, int]], optional) – The lengths to pad on the top and left. If int is provided, pad both top and left borders with this value. If Sequence[int, int] is provided, it should be in order of [top, left]. Default: None, means to pad symmetrically, keeping the original image in the center.
fill_value (Union[int, tuple[int, int, int]], optional) – Pixel value used to pad the borders, only valid when padding_mode is Border.CONSTANT. If int is provided, it will be used for all RGB channels. If tuple[int, int, int] is provided, it will be used for R, G, B channels respectively. Default: 0.
padding_mode (Border, optional) –
Method of padding. It can be Border.CONSTANT, Border.EDGE, Border.REFLECT or Border.SYMMETRIC. Default: Border.CONSTANT.
Border.CONSTANT, pads with a constant value.
Border.EDGE, pads with the last value at the edge of the image.
Border.REFLECT, pads with reflection of the image omitting the last value on the edge.
Border.SYMMETRIC, pads with reflection of the image repeating the last value on the edge.
- Raises
TypeError – If size is not of type int or Sequence[int, int].
TypeError – If offset is not of type int or Sequence[int, int].
TypeError – If fill_value is not of type int or tuple[int, int, int].
TypeError – If padding_mode is not of type mindspore.dataset.vision.Border.
ValueError – If size is not positive.
ValueError – If offset is negative.
ValueError – If fill_value is not in range of [0, 255].
RuntimeError – If shape of the input image is not <H, W> or <H, W, C>.
- Supported Platforms:
CPU
Examples
>>> transforms_list = [vision.Decode(), vision.PadToSize([256, 256])]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
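To illustrate the default behavior described above (offset=None pads symmetrically, keeping the original image centered, and fill_value fills the borders in Border.CONSTANT mode), here is a minimal NumPy sketch of the same computation. The function name pad_to_size and the offset arithmetic are illustrative assumptions, not MindSpore internals:

```python
import numpy as np

def pad_to_size(image, size, offset=None, fill_value=0):
    # Illustrative sketch of PadToSize's Border.CONSTANT behavior (not MindSpore internals).
    # size: int or (height, width); offset: int or (top, left), None pads symmetrically.
    target_h, target_w = (size, size) if isinstance(size, int) else size
    h, w = image.shape[:2]
    if offset is None:
        # Default: center the original image in the padded output.
        top, left = (target_h - h) // 2, (target_w - w) // 2
    else:
        top, left = (offset, offset) if isinstance(offset, int) else offset
    bottom, right = target_h - h - top, target_w - w - left
    # Pad only H and W; leave the channel dimension (if any) untouched.
    pad_width = [(top, bottom), (left, right)] + [(0, 0)] * (image.ndim - 2)
    return np.pad(image, pad_width, mode="constant", constant_values=fill_value)

img = np.ones((100, 120, 3), dtype=np.uint8)
out = pad_to_size(img, (256, 256))
print(out.shape)  # (256, 256, 3)
```

With the inputs above, the top border is (256 - 100) // 2 = 78 rows and the left border is (256 - 120) // 2 = 68 columns of the fill value, matching the "pad symmetrically" default.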