mindspore.dataset.transforms.Compose
- class mindspore.dataset.transforms.Compose(transforms)[source]
Compose a list of data augmentation operations to be applied as a single transform.
Note
Compose can combine data augmentation operations from modules such as mindspore.dataset.transforms and mindspore.dataset.vision, as well as user-defined Python callable objects, into a single transform. A user-defined Python callable must return a value of type numpy.ndarray, as sketched below.
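For instance, here is a minimal sketch of composing a built-in vision operation with a user-defined callable; the function name scale_to_unit and the input shape are illustrative only, not part of the API:
>>> import numpy as np
>>> import mindspore.dataset.transforms as transforms
>>> import mindspore.dataset.vision as vision
>>>
>>> def scale_to_unit(img):
...     # user-defined callable: must return a numpy.ndarray
...     return (img / 255.0).astype(np.float32)
>>>
>>> image = np.random.randint(0, 255, size=(100, 100, 3)).astype(np.uint8)
>>> compose = transforms.Compose([vision.Resize((32, 32)), scale_to_unit])
>>> out = compose(image)  # eager invocation; out is expected to be a numpy.ndarray of shape (32, 32, 3)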
- Parameters:
transforms (list) - A list of transforms to be composed.
- Raises:
TypeError - If transforms is not of type list.
ValueError - If transforms is an empty list (see the sketch after this list).
TypeError - If an element of transforms is neither a Python callable object nor a data augmentation operation from the audio/text/transforms/vision modules.
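As a rough illustration of the validation behavior described above (this sketch assumes the check is performed when Compose is constructed):
>>> import mindspore.dataset.transforms as transforms
>>>
>>> try:
...     transforms.Compose([])      # empty transforms list is rejected
... except ValueError:
...     print("empty transforms list raises ValueError")
empty transforms list raises ValueError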
- Supported Platforms:
CPU
Examples:
>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.transforms as transforms
>>> import mindspore.dataset.vision as vision
>>> from mindspore.dataset.transforms import Relational
>>>
>>> # Use the transform in dataset pipeline mode
>>> # create a dataset from randomly generated image data
>>> data = np.random.randint(0, 255, size=(1, 100, 100, 3)).astype(np.uint8)
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["image"])
>>>
>>> # create a list of transformations to be applied to the image data
>>> transform = transforms.Compose([
...     vision.RandomHorizontalFlip(0.5),
...     vision.ToTensor(),
...     vision.Normalize((0.491, 0.482, 0.447), (0.247, 0.243, 0.262), is_hwc=False),
...     vision.RandomErasing()])
>>> # apply the transform to the dataset through the dataset.map function
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transform, input_columns=["image"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["image"].shape, item["image"].dtype)
...     break
(3, 100, 100) float32
>>>
>>> # Compose can also be invoked implicitly, by just passing in a list of ops
>>> # the above example then becomes:
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["image"])
>>> transforms_list = [vision.RandomHorizontalFlip(0.5),
...                    vision.ToTensor(),
...                    vision.Normalize((0.491, 0.482, 0.447), (0.247, 0.243, 0.262), is_hwc=False),
...                    vision.RandomErasing()]
>>>
>>> # apply the transform to the dataset through dataset.map()
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms_list, input_columns=["image"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["image"].shape, item["image"].dtype)
...     break
(3, 100, 100) float32
>>>
>>> # Certain C++ and Python ops can be combined, but not all of them
>>> # An example of combined operations
>>> arr = [0, 1]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(arr, column_names=["cols"], shuffle=False)
>>> transformed_list = [transforms.OneHot(2),
...                     transforms.Mask(transforms.Relational.EQ, 1)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transformed_list, input_columns=["cols"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["cols"].shape, item["cols"].dtype)
...     break
(2,) bool
>>>
>>> # Here is an example of mixing vision ops
>>> op_list = [vision.Resize((224, 244)),
...            vision.ToPIL(),
...            np.array,  # need to convert the PIL image to a NumPy array to pass it to a C++ operation
...            vision.Resize((24, 24))]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["image"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=op_list, input_columns=["image"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["image"].shape, item["image"].dtype)
...     break
(24, 24, 3) uint8
>>>
>>> # Use the transform in eager mode
>>> data = np.array([1, 2, 3])
>>> output = transforms.Compose([transforms.Fill(10), transforms.Mask(Relational.EQ, 100)])(data)
>>> print(output.shape, output.dtype)
(3,) bool