mindspore.dataset
At the heart of MindSpore's data loading utilities is the mindspore.dataset module, a dataset engine based on a pipeline design.
This module provides the following data loading methods to help users load datasets into MindSpore.
User-defined dataset loading: allows users to define random-accessible (map-style) or iterable-style datasets to customize data reading and processing logic (a sketch follows this list).
Standard format dataset loading: supports loading dataset files in standard data formats, including MindRecord and TFRecord.
Open source dataset loading: supports reading open source datasets, such as MNIST, CIFAR-10, CLUE, and LJSpeech.
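For example, a map-style dataset only needs __getitem__ and __len__. The following minimal sketch (the MyDataset class, its column names, and its random data are illustrative placeholders) wraps such a source with GeneratorDataset:

import numpy as np
import mindspore.dataset as ds

class MyDataset:
    # A random-accessible (map-style) source: defines __getitem__ and __len__.
    def __init__(self):
        self._data = np.random.sample((5, 2)).astype(np.float32)
        self._label = np.random.sample((5, 1)).astype(np.float32)

    def __getitem__(self, index):
        return self._data[index], self._label[index]

    def __len__(self):
        return len(self._data)

# GeneratorDataset turns the Python source into a dataset pipeline node.
dataset = ds.GeneratorDataset(source=MyDataset(), column_names=["data", "label"])
for row in dataset.create_dict_iterator(output_numpy=True):
    print(row["data"].shape, row["label"].shape)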
In addition, this module provides data samplers, transformations, and batching, as well as basic configuration options such as the random seed and parallelism settings, to be used in conjunction with dataset loading.
Data Sampler: provides various common samplers, such as RandomSampler and DistributedSampler.
Data Transformations: provides multiple dataset operations to perform data augmentation and batching.
Basic Configuration: provides pipeline configuration for random seed setting, parallelism setting, data recovery mode, etc.
Descriptions of common dataset terms are as follows:
Dataset, the base class of all the datasets. It provides data processing methods to help preprocess the data.
SourceDataset, an abstract class to represent the source of a dataset pipeline, which produces data from sources such as files and databases.
MappableDataset, an abstract class to represent a source dataset that supports random access.
Iterator, the base class of dataset iterator for enumerating elements.
Introduction to data processing pipeline
As shown in the figure above, the mindspore.dataset module makes it easy for users to define data preprocessing pipelines and to transform the samples in a dataset in an efficient (multi-process / multi-thread) manner. The specific steps are as follows:
Loading datasets: users can load supported datasets using the corresponding *Dataset classes, or load Python-layer custom datasets through a user-defined loader plus GeneratorDataset. The loading classes also accept a variety of parameters such as a sampler, data slicing, and data shuffling;
Dataset operation: the user applies the dataset methods .shuffle / .filter / .skip / .split / .take / … to further shuffle, filter, skip, or take a maximum number of samples from the dataset;
Dataset sample transform operation: the user can add data transform operations (vision transforms, NLP transforms, audio transforms) to the map operation to perform transformations. During preprocessing, multiple map operations can be defined to apply different transforms to different fields. A transform can also be a user-defined Python function (pyfunc);
Batch: after the samples are transformed, the user can use the batch operation to organize multiple samples into batches, or apply self-defined batch logic through the per_batch_map parameter;
Iterator: finally, the user can call the dataset method create_dict_iterator to create an iterator, which outputs the preprocessed data cyclically (an end-to-end sketch of these steps follows).
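The following end-to-end sketch walks through these five steps; it assumes the MNIST dataset has already been downloaded to ./mnist (a placeholder path):

import mindspore as ms
import mindspore.dataset as ds
import mindspore.dataset.vision as vision
import mindspore.dataset.transforms as transforms

# 1. Load a supported dataset with its *Dataset class; shuffle at load time.
dataset = ds.MnistDataset(dataset_dir="./mnist", shuffle=True)

# 2./3. Apply transforms to specific columns via map.
dataset = dataset.map(operations=[vision.Rescale(1.0 / 255.0, 0.0), vision.HWC2CHW()],
                      input_columns=["image"])
dataset = dataset.map(operations=transforms.TypeCast(ms.int32), input_columns=["label"])

# 4. Organize samples into batches of 32.
dataset = dataset.batch(32, drop_remainder=True)

# 5. Create an iterator that yields dictionaries of preprocessed batches.
for batch in dataset.create_dict_iterator(output_numpy=True):
    print(batch["image"].shape, batch["label"].shape)
    break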
Quick start of Dataset Pipeline
For a quick start with the dataset pipeline, download Load & Process Data With Dataset Pipeline to a local machine and run it in sequence.
User Defined
A source dataset that generates data from Python by invoking the Python data source each epoch.
Standard Format
A source dataset that reads and parses MindRecord datasets.
A source dataset that reads and parses MindRecord datasets stored in cloud storage such as OBS, Minio or AWS S3.
A source dataset that reads and parses datasets stored on disk in TFData format.
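A brief sketch of loading these standard formats (the file paths are placeholders):

import mindspore.dataset as ds

# MindRecord: pass one or more .mindrecord files.
mind_ds = ds.MindDataset("/path/to/train.mindrecord")

# TFRecord: files stored in TFData format; a schema or columns_list may also be given.
tf_ds = ds.TFRecordDataset(["/path/to/train.tfrecord"], shuffle=True)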
Open Source
Vision
Caltech 101 dataset.
Caltech 256 dataset.
CelebA(CelebFaces Attributes) dataset.
CIFAR-10 dataset.
CIFAR-100 dataset.
Cityscapes dataset.
COCO(Common Objects in Context) dataset.
DIV2K(DIVerse 2K resolution image) dataset.
EMNIST(Extended MNIST) dataset.
A source dataset for generating fake images.
Fashion-MNIST dataset.
Flickr8k and Flickr30k datasets.
Oxford 102 Flower dataset.
Food101 dataset.
A source dataset that reads images from a tree of directories.
KITTI dataset.
KMNIST(Kuzushiji-MNIST) dataset.
LFW(Labeled Faces in the Wild) dataset.
LSUN(Large-scale Scene UNderstanding) dataset.
A source dataset for reading images from a Manifest file.
MNIST dataset.
Omniglot dataset.
PhotoTour dataset.
Places365 dataset.
QMNIST dataset.
RenderedSST2(Rendered Stanford Sentiment Treebank v2) dataset.
SB(Semantic Boundaries) Dataset.
SBU(SBU Captioned Photo) dataset.
Semeion dataset.
STL-10 dataset.
SUN397(Scene UNderstanding) dataset.
SVHN(Street View House Numbers) dataset.
USPS(U.S. Postal Service) dataset.
VOC(Visual Object Classes) dataset.
WIDERFace dataset.
Text
AG News dataset.
Amazon Review Polarity and Amazon Review Full datasets.
CLUE(Chinese Language Understanding Evaluation) dataset.
A source dataset that reads and parses comma-separated values (CSV) files as a dataset.
CoNLL-2000(Conference on Computational Natural Language Learning) chunking dataset.
DBpedia dataset.
EnWik9 dataset.
IMDb(Internet Movie Database) dataset.
IWSLT2016(International Workshop on Spoken Language Translation) dataset.
IWSLT2017(International Workshop on Spoken Language Translation) dataset.
Multi30k dataset.
PennTreebank dataset.
Sogou News dataset.
SQuAD 1.1 and SQuAD 2.0 datasets.
SST2(Stanford Sentiment Treebank v2) dataset.
A source dataset that reads and parses datasets stored on disk in text format.
UDPOS(Universal Dependencies dataset for Part of Speech) dataset.
WikiText2 and WikiText103 datasets.
YahooAnswers dataset.
Yelp Review Polarity and Yelp Review Full datasets.
Audio
CMU Arctic dataset.
GTZAN dataset.
LibriTTS dataset.
LJSpeech dataset.
Speech Commands dataset.
Tedlium dataset.
YesNo dataset.
Others
Creates a dataset with given data slices, mainly for loading Python data into a dataset.
Creates a dataset with filler data provided by the user.
A source dataset that generates random data.
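For instance, the data-slices loader above (NumpySlicesDataset) slices in-memory Python data along the first dimension; the column names here are arbitrary:

import numpy as np
import mindspore.dataset as ds

features = np.array([[1, 2], [3, 4], [5, 6]], dtype=np.float32)
labels = np.array([0, 1, 0], dtype=np.int32)

# Each row of the input arrays becomes one sample in the pipeline.
dataset = ds.NumpySlicesDataset((features, labels), column_names=["feature", "label"])
for row in dataset.create_dict_iterator(output_numpy=True):
    print(row["feature"], row["label"])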
Sampler
A sampler that accesses a shard of the dataset; it helps divide the dataset into multiple subsets for distributed training.
Samples K elements for each of the P classes in the dataset.
Samples the elements randomly.
Samples the dataset elements sequentially, which is equivalent to not using a sampler.
Samples the elements randomly from a sequence of indices.
Samples the elements from a sequence of indices.
Samples the elements from [0, len(weights) - 1] randomly with the given weights (probabilities).
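A short sketch of attaching samplers to a mappable dataset (the image directory is a placeholder):

import mindspore.dataset as ds

# Randomly sample 640 images without replacement.
sampler = ds.RandomSampler(replacement=False, num_samples=640)
dataset = ds.ImageFolderDataset(dataset_dir="./images", sampler=sampler)

# For distributed training, give each of 8 devices a distinct shard instead.
dist_sampler = ds.DistributedSampler(num_shards=8, shard_id=0)
dist_dataset = ds.ImageFolderDataset(dataset_dir="./images", sampler=dist_sampler)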
Config
The configuration module provides various functions to set and get the supported configuration parameters, and read a configuration file.
Set the upper limit on the number of batches of data that the Host can send to the Device.
Load the project configuration from the file.
Set the seed for the random number generator in the data pipeline.
Get the random number seed.
Set the buffer queue size between dataset operations in the pipeline.
Get the prefetch size in number of rows.
Set a new global configuration default value for the number of parallel workers.
Get the global configuration of the number of parallel workers.
Set the default state of NUMA (enabled or disabled).
Get the state of NUMA, indicating enabled or disabled.
Set the default interval (in milliseconds) for monitor sampling.
Get the global configuration of the sampling interval of the performance monitor.
Set the default timeout (in seconds) for WaitedDSCallback.
Get the default timeout (in seconds) for WaitedDSCallback.
Set num_parallel_workers for each op automatically (this feature is turned off by default).
Get the setting (turned on or off) for the automatic number of workers; it is disabled by default.
Set whether to use shared memory for interprocess communication when data processing multiprocessing is turned on.
Get the default state of the shared memory enabled variable.
Set whether to enable AutoTune for data pipeline parameters.
Get whether AutoTune is currently enabled; it is disabled by default.
Set the configuration adjustment interval (in steps) for AutoTune.
Get the current configuration adjustment interval (in steps) for AutoTune.
Set the automatic offload flag of the dataset.
Get the state of the automatic offload flag (True or False); it is disabled by default.
Set the state of the watchdog Python thread; it is enabled by default.
Get the state of the watchdog Python thread, indicating enabled or disabled.
Set whether the dataset pipeline should recover in fast mode during failover (in fast mode, random augmentations may not produce the same results as before the failure occurred).
Get whether fast recovery mode is enabled for the current dataset pipeline.
Set the default interval (in seconds) for the multiprocessing/multithreading timeout when the main process/thread gets data from subprocesses/child threads.
Get the global configuration of the multiprocessing/multithreading timeout when the main process/thread gets data from subprocesses/child threads.
Set the method by which erroneous samples should be processed in a dataset pipeline.
Get the current strategy for processing erroneous samples in a dataset pipeline.
An enumeration for error_samples_mode.
Set the debug_mode flag of the dataset pipeline.
Get whether debug mode is currently enabled for the data pipeline.
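A small sketch of reading and adjusting the global pipeline configuration through ds.config (the values are arbitrary):

import mindspore.dataset as ds

ds.config.set_seed(1234)               # fix the random seed for reproducible shuffling
ds.config.set_num_parallel_workers(8)  # default number of parallel workers per operation
ds.config.set_prefetch_size(16)        # buffer queue size between dataset operations

print(ds.config.get_seed())
print(ds.config.get_num_parallel_workers())
print(ds.config.get_prefetch_size())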
Tools
This class helps to get dataset information dynamically when the input of batch_size or per_batch_map in the batch operation is a callable object.
A client to interface with the tensor caching service.
Abstract base class used to build dataset callback classes.
Class to represent a schema of a dataset.
Specify the shuffle mode.
Abstract base class used to build dataset callback classes that are synchronized with the training callback class mindspore.train.Callback.
Compare whether two dataset pipelines are the same.
The base class for Dataset Pipeline Python Debugger hooks.
Construct a dataset pipeline from a JSON file produced by the dataset serialize function.
Serialize a dataset pipeline into a JSON file.
Write the dataset pipeline graph to logger.info.
Wait until the dataset files required by all devices are downloaded.
Draw an image with the given bboxes and class labels (with scores).
Line-based file reader.
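As an example of the serialize and deserialize utilities above, a pipeline definition (not the data itself) can be round-tripped through JSON; the dataset path is a placeholder:

import mindspore.dataset as ds

dataset = ds.MnistDataset(dataset_dir="./mnist").batch(32)

# Dump the pipeline graph to JSON, then rebuild an equivalent pipeline from it.
ds.serialize(dataset, json_filepath="pipeline.json")
rebuilt = ds.deserialize(json_filepath="pipeline.json")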