mindspore.dataset.DistributedSampler
- class mindspore.dataset.DistributedSampler(num_shards, shard_id, shuffle=True, num_samples=None, offset=-1)
A sampler that accesses a shard of the dataset; it helps divide the dataset into multiple subsets for distributed training.
Note
The shuffling modes supported for different datasets are as follows:

Shuffling Mode      MindDataset    TFRecordDataset    Others
Shuffle.ADAPTIVE    Supported      Not Supported      Not Supported
Shuffle.GLOBAL      Supported      Supported          Supported
Shuffle.PARTIAL     Supported      Not Supported      Not Supported
Shuffle.FILES       Supported      Supported          Not Supported
Shuffle.INFILE      Supported      Not Supported      Not Supported
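For example, since Shuffle.ADAPTIVE is only listed as supported for MindDataset in the table above, a minimal sketch of pairing that mode with this sampler might look as follows; the MindRecord file path and shard layout are placeholder assumptions, not values from this page:
>>> import mindspore.dataset as ds
>>> # Shuffle.ADAPTIVE is only listed as supported for MindDataset above;
>>> # "/path/to/data.mindrecord" is a placeholder file path.
>>> sampler = ds.DistributedSampler(num_shards=8, shard_id=0, shuffle=ds.Shuffle.ADAPTIVE)
>>> dataset = ds.MindDataset("/path/to/data.mindrecord", sampler=sampler)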
- Parameters
num_shards (int) – Number of shards to divide the dataset into.
shard_id (int) – Shard ID of the current shard, which should be within the range of [0, num_shards - 1].
shuffle (Union[bool, Shuffle], optional) – Specify the shuffle mode. Default: True, which performs mindspore.dataset.Shuffle.GLOBAL. If shuffle is False, no shuffling is performed. The available shuffle levels are defined by mindspore.dataset.Shuffle:
Shuffle.ADAPTIVE: When the number of dataset samples is less than or equal to 100 million, Shuffle.GLOBAL is used; when it is greater than 100 million, Shuffle.PARTIAL is used and the shuffle is performed once every 1 million samples.
Shuffle.GLOBAL: Shuffle all rows of data in the dataset globally. The memory usage is large.
Shuffle.PARTIAL: Shuffle the data partially, once every 1 million samples. The memory usage is less than that of Shuffle.GLOBAL.
Shuffle.FILES: Shuffle the file sequence but keep the order of data within each file.
Shuffle.INFILE: Keep the file sequence the same but shuffle the data within each file.
num_samples (int, optional) – The number of samples to draw. Default: None, which means sample all elements.
offset (int, optional) – The starting shard ID to which elements in the dataset are assigned, which should be no more than num_shards. This parameter is only valid when a ConcatDataset takes a mindspore.dataset.DistributedSampler as its sampler, in which case it affects the number of samples in each shard (see the sketch after this list). Default: -1, which means each shard has the same number of samples.
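A minimal sketch of the offset parameter, assuming two small placeholder in-memory datasets that are concatenated with + and attached to the sampler via ConcatDataset.use_sampler; the concrete values are illustrative only:
>>> import mindspore.dataset as ds
>>> # two small placeholder in-memory datasets
>>> data1 = ds.NumpySlicesDataset([1, 2, 3, 4], column_names=["col1"], shuffle=False)
>>> data2 = ds.NumpySlicesDataset([5, 6, 7, 8], column_names=["col1"], shuffle=False)
>>> concat = data1 + data2
>>> # offset=1 shifts the shard to which the first elements are assigned
>>> sampler = ds.DistributedSampler(num_shards=2, shard_id=0, shuffle=False, offset=1)
>>> concat.use_sampler(sampler)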
- Raises
TypeError – If num_shards is not of type int.
TypeError – If shard_id is not of type int.
TypeError – If shuffle is not of type bool or Shuffle.
TypeError – If num_samples is not of type int.
TypeError – If offset is not of type int.
ValueError – If num_samples is a negative value.
RuntimeError – If num_shards is not a positive value.
RuntimeError – If shard_id is not in the range of [0, num_shards).
RuntimeError – If offset is greater than num_shards .
Examples
>>> import mindspore.dataset as ds
>>> # creates a distributed sampler with 10 shards in total. This shard is shard 5.
>>> sampler = ds.DistributedSampler(10, 5)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
- add_child(sampler)
Add a child sampler to the given sampler. The parent sampler receives all data from the output of the child sampler and applies its own sampling logic to return new samples.
Note
If a child sampler is added and it has a shuffle option, its value cannot be Shuffle.PARTIAL. Additionally, the parent sampler's shuffle value must be Shuffle.GLOBAL.
- Parameters
sampler (Sampler) – Object used to choose samples from the dataset. Only built-in samplers (mindspore.dataset.DistributedSampler, mindspore.dataset.PKSampler, mindspore.dataset.RandomSampler, mindspore.dataset.SequentialSampler, mindspore.dataset.SubsetRandomSampler, mindspore.dataset.WeightedRandomSampler) are supported.
Examples
>>> import mindspore.dataset as ds
>>> sampler = ds.SequentialSampler(start_index=0, num_samples=3)
>>> sampler.add_child(ds.RandomSampler(num_samples=4))
>>> dataset = ds.Cifar10Dataset(cifar10_dataset_dir, sampler=sampler)
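Since this page documents DistributedSampler, a similar sketch can use it as the parent sampler; the dataset, shard layout, and sample counts below are assumptions for illustration, and the parent keeps the global shuffle required by the note above:
>>> import mindspore.dataset as ds
>>> # parent DistributedSampler with global shuffle, as required when adding a child sampler
>>> parent_sampler = ds.DistributedSampler(num_shards=2, shard_id=0, shuffle=ds.Shuffle.GLOBAL)
>>> parent_sampler.add_child(ds.SequentialSampler(start_index=0, num_samples=6))
>>> # placeholder in-memory dataset used only to show the sampler being attached
>>> dataset = ds.NumpySlicesDataset(list(range(10)), column_names=["col1"], sampler=parent_sampler)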
- get_child()
Get the child sampler of the given sampler.
- Returns
Sampler, the child sampler of the given sampler.
Examples
>>> import mindspore.dataset as ds
>>> sampler = ds.SequentialSampler(start_index=0, num_samples=3)
>>> sampler.add_child(ds.RandomSampler(num_samples=2))
>>> child_sampler = sampler.get_child()
- get_num_samples()
Get the num_samples value of the current sampler instance. This parameter can optionally be passed in when defining the sampler (default: None). This method returns the num_samples value. If the current sampler has child samplers, it will also query the child samplers and combine the obtained values according to certain rules. The following table shows the possible combinations and the final result returned.
child sampler    num_samples    child_samples    result
T                x              y                min(x, y)
T                x              None             x
T                None           y                y
T                None           None             None
None             x              n/a              x
None             None           n/a              None
- Returns
int, the number of samples, or None.
Examples
>>> import mindspore.dataset as ds
>>> sampler = ds.SequentialSampler(start_index=0, num_samples=3)
>>> num_samples = sampler.get_num_samples()
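A minimal sketch of the combination rules above, with counts chosen as assumptions to exercise the min(x, y) row of the table:
>>> import mindspore.dataset as ds
>>> # parent requests 5 samples, child provides 3; per the table, the result is min(5, 3)
>>> sampler = ds.DistributedSampler(num_shards=1, shard_id=0, num_samples=5)
>>> sampler.add_child(ds.RandomSampler(num_samples=3))
>>> num_samples = sampler.get_num_samples()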