mindspore.dataset
This module provides APIs to load and process various common datasets such as MNIST, CIFAR-10, CIFAR-100, VOC, COCO, ImageNet, CelebA, and CLUE. It also supports datasets in standard formats, including MindRecord, TFRecord, and Manifest. Users can also define their own datasets with this module.
In addition, this module provides APIs to sample data while it is being loaded.
Cache can be enabled for most datasets through their 'cache' argument. Note that cache is not yet supported on the Windows platform, so do not use it while loading and processing data on Windows. For a more detailed introduction and the limitations, refer to Single-Node Tensor Cache.
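As an illustration, here is a minimal sketch of enabling cache. It assumes a cache server has already been started on the same machine and a cache session has been created; the session id and dataset directory below are placeholders.

import mindspore.dataset as ds

# Assumes a cache server is running (started with `cache_admin --start`) and a
# session was created with `cache_admin -g`; the id below is a placeholder for
# the value printed by that command.
some_cache = ds.DatasetCache(session_id=1456416665, size=0)

# Pass the cache client to a source dataset through its 'cache' argument.
# "./datasets/images" is a hypothetical image-folder directory.
data = ds.ImageFolderDataset("./datasets/images", num_samples=4, cache=some_cache)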
Commonly imported modules in the corresponding API examples are as follows:
import mindspore.dataset as ds
from mindspore.dataset.transforms import c_transforms
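For a quick illustration, the following is a minimal sketch of a typical pipeline: it loads MNIST, casts the label column with a c_transforms operation, batches the rows, and iterates once. The dataset directory is a placeholder and assumes MNIST has already been downloaded.

import mindspore.dataset as ds
from mindspore.dataset.transforms import c_transforms
import mindspore.common.dtype as mstype

# "./datasets/MNIST_Data/train" is a placeholder for a local MNIST directory.
mnist = ds.MnistDataset("./datasets/MNIST_Data/train", num_samples=1000, shuffle=True)

# Cast the "label" column to int32 with a c_transforms operation.
mnist = mnist.map(operations=c_transforms.TypeCast(mstype.int32), input_columns="label")

# Batch the rows and iterate over the pipeline.
mnist = mnist.batch(32, drop_remainder=True)
for item in mnist.create_dict_iterator(output_numpy=True):
    print(item["image"].shape, item["label"].shape)
    break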
Vision
mindspore.dataset.CelebADataset: A source dataset for reading and parsing the CelebA dataset.
mindspore.dataset.Cifar100Dataset: A source dataset for reading and parsing the Cifar100 dataset.
mindspore.dataset.Cifar10Dataset: A source dataset for reading and parsing the Cifar10 dataset.
mindspore.dataset.CocoDataset: A source dataset for reading and parsing the COCO dataset.
mindspore.dataset.ImageFolderDataset: A source dataset that reads images from a tree of directories.
mindspore.dataset.MnistDataset: A source dataset for reading and parsing the MNIST dataset.
mindspore.dataset.VOCDataset: A source dataset for reading and parsing the VOC dataset.
Text
mindspore.dataset.CLUEDataset: A source dataset that reads and parses CLUE datasets.
Graph
mindspore.dataset.GraphData: Reads the graph dataset used for GNN training from the shared file and database.
Standard Format
mindspore.dataset.CSVDataset: A source dataset that reads and parses comma-separated values (CSV) datasets.
mindspore.dataset.ManifestDataset: A source dataset for reading images from a Manifest file.
mindspore.dataset.MindDataset: A source dataset for reading and parsing MindRecord datasets.
mindspore.dataset.TextFileDataset: A source dataset that reads and parses datasets stored on disk in text format.
mindspore.dataset.TFRecordDataset: A source dataset for reading and parsing datasets stored on disk in TFData format.
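As an example of one of the standard-format readers, the sketch below writes a tiny CSV file and loads it with CSVDataset; the file name and column names are only illustrative.

import mindspore.dataset as ds

# Write a tiny CSV file so the example is self-contained.
with open("example.csv", "w") as f:
    f.write("1,2,3\n4,5,6\n")

# The file has no header row, so column names are supplied explicitly.
csv_data = ds.CSVDataset("example.csv", column_names=["a", "b", "c"], shuffle=False)
for row in csv_data.create_dict_iterator(output_numpy=True):
    print(row["a"], row["b"], row["c"])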
User Defined
mindspore.dataset.GeneratorDataset: A source dataset that generates data from Python by invoking a Python data source each epoch.
mindspore.dataset.NumpySlicesDataset: Creates a dataset with given data slices, mainly for loading Python data into a dataset.
mindspore.dataset.PaddedDataset: Creates a dataset with filler data provided by the user.
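For instance, a user-defined pipeline might look like the minimal sketch below; the generator function, data values, and column names are purely illustrative.

import numpy as np
import mindspore.dataset as ds

# A user-defined Python generator; each yielded tuple becomes one row.
def my_generator():
    for i in range(5):
        yield (np.array([i], dtype=np.int32), np.array([i * i], dtype=np.int32))

gen_data = ds.GeneratorDataset(source=my_generator, column_names=["x", "x_squared"])

# NumpySlicesDataset slices in-memory Python data along its first dimension.
slices_data = ds.NumpySlicesDataset(data={"a": [1, 2, 3], "b": [4, 5, 6]}, shuffle=False)

for row in gen_data.create_dict_iterator(output_numpy=True):
    print(row["x"], row["x_squared"])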
Sampler
mindspore.dataset.DistributedSampler: A sampler that accesses a shard of the dataset; it helps divide the dataset into multiple subsets for distributed training.
mindspore.dataset.PKSampler: Samples K elements for each P class in the dataset.
mindspore.dataset.RandomSampler: Samples the elements randomly.
mindspore.dataset.SequentialSampler: Samples the dataset elements sequentially, which is equivalent to not using a sampler.
mindspore.dataset.SubsetRandomSampler: Samples the elements randomly from a sequence of indices.
mindspore.dataset.SubsetSampler: Samples the elements from a sequence of indices.
mindspore.dataset.WeightedRandomSampler: Samples the elements from [0, len(weights) - 1] randomly with the given weights (probabilities).
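To illustrate how samplers are attached to a source dataset, here is a minimal sketch; the dataset directory is a placeholder.

import mindspore.dataset as ds

# "./datasets/images" is a placeholder for an image-folder directory.
DATA_DIR = "./datasets/images"

# Randomly pick 8 samples without replacement.
sampler = ds.RandomSampler(replacement=False, num_samples=8)
data = ds.ImageFolderDataset(DATA_DIR, sampler=sampler)

# For distributed training, DistributedSampler reads only one shard of the data;
# here shard 0 out of 2 shards.
dist_sampler = ds.DistributedSampler(num_shards=2, shard_id=0)
dist_data = ds.ImageFolderDataset(DATA_DIR, sampler=dist_sampler)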
Others
mindspore.dataset.DatasetCache: A client to interface with the tensor caching service.
mindspore.dataset.DSCallback: Abstract base class used to build a dataset callback class.
mindspore.dataset.Schema: Class to represent a schema of a dataset.
mindspore.dataset.WaitedDSCallback: Abstract base class used to build a dataset callback class that is synchronized with the training callback.
mindspore.dataset.compare: Compare whether two dataset pipelines are the same.
mindspore.dataset.deserialize: Construct a dataset pipeline from a JSON file produced by de.serialize().
mindspore.dataset.serialize: Serialize a dataset pipeline into a JSON file.
mindspore.dataset.show: Write the dataset pipeline graph to the logger.info file.
mindspore.dataset.utils.imshow_det_bbox: Draw an image with given bboxes and class labels (with scores).
mindspore.dataset.zip: Zip the datasets in the input tuple of datasets.
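As a brief illustration of two of these utilities, the sketch below zips two in-memory datasets and serializes a simple pipeline to JSON; the dataset directory and file names are placeholders.

import numpy as np
import mindspore.dataset as ds

# Two small in-memory datasets with distinct column names (a requirement for zip).
ds1 = ds.NumpySlicesDataset({"col_a": np.arange(3)}, shuffle=False)
ds2 = ds.NumpySlicesDataset({"col_b": np.arange(3, 6)}, shuffle=False)

# Combine them column-wise into a single dataset.
zipped = ds.zip((ds1, ds2))
for row in zipped.create_dict_iterator(output_numpy=True):
    print(row["col_a"], row["col_b"])

# Serialize a pipeline built from an on-disk source dataset and rebuild it later.
# "./datasets/MNIST_Data/train" is a placeholder directory.
pipeline = ds.MnistDataset("./datasets/MNIST_Data/train", num_samples=10).batch(2)
ds.serialize(pipeline, json_filepath="mnist_pipeline.json")
rebuilt = ds.deserialize(json_filepath="mnist_pipeline.json")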