mindspore.dataset.CocoDataset
- class mindspore.dataset.CocoDataset(dataset_dir, annotation_file, task='Detection', num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None, extra_metadata=False, decrypt=None)[source]
COCO (Common Objects in Context) dataset.
CocoDataset supports five kinds of tasks, which are Object Detection, Keypoint Detection, Stuff Segmentation, Panoptic Segmentation and Captioning of the COCO 2017 Train/Val/Test dataset.
- Parameters
dataset_dir (str) – Path to the root directory that contains the dataset.
annotation_file (str) – Path to the annotation JSON file.
task (str, optional) – Set the task type for reading COCO data. Supported task types: 'Detection', 'Stuff', 'Panoptic', 'Keypoint' and 'Captioning'. Default: 'Detection'.
num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.
num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers().
shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.
decode (bool, optional) – Decode the images after reading. Default: False.
sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.
num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.
shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.
cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.
extra_metadata (bool, optional) – Flag to add extra metadata to a row. If True, an additional column [_meta-filename, dtype=string] will be output at the end. Default: False.
decrypt (callable, optional) – Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: None, no decryption.
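For a quick illustration of how several of the optional parameters fit together, here is a minimal sketch assuming a local COCO-2017 layout; the paths are placeholders, not real files.

import mindspore.dataset as ds

# Placeholder paths; substitute your local COCO layout.
coco_dataset_dir = "/path/to/coco_dataset_directory/train2017"
coco_annotation_file = "/path/to/coco_dataset_directory/annotations/instances_train2017.json"

# Read at most 100 decoded images for the Detection task with 4 worker threads.
dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
                         annotation_file=coco_annotation_file,
                         task='Detection',
                         num_samples=100,
                         num_parallel_workers=4,
                         shuffle=True,
                         decode=True)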
The generated dataset with different task settings has different output columns:
| task | Output column |
| --- | --- |
| Detection | [image, dtype=uint8], [bbox, dtype=float32], [category_id, dtype=uint32], [iscrowd, dtype=uint32] |
| Stuff | [image, dtype=uint8], [segmentation, dtype=float32], [iscrowd, dtype=uint32] |
| Keypoint | [image, dtype=uint8], [keypoints, dtype=float32], [num_keypoints, dtype=uint32] |
| Panoptic | [image, dtype=uint8], [bbox, dtype=float32], [category_id, dtype=uint32], [iscrowd, dtype=uint32], [area, dtype=uint32] |
| Captioning | [image, dtype=uint8], [captions, dtype=string] |
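As a minimal sketch (again with placeholder paths), the column layout of a given task can be checked at runtime with the standard Dataset methods get_col_names() and create_dict_iterator():

import mindspore.dataset as ds

dataset = ds.CocoDataset(dataset_dir="/path/to/coco_dataset_directory/images",
                         annotation_file="/path/to/coco_dataset_directory/annotation_file",
                         task='Detection',
                         decode=True)

# For the Detection task this prints ['image', 'bbox', 'category_id', 'iscrowd'].
print(dataset.get_col_names())

# Inspect the first row; values come back as NumPy arrays with output_numpy=True.
for row in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(row["image"].shape, row["bbox"].shape, row["category_id"], row["iscrowd"])
    break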
- Raises
RuntimeError – If dataset_dir does not contain data files.
RuntimeError – If sampler and shuffle are specified at the same time.
RuntimeError – If sampler and num_shards/shard_id are specified at the same time.
RuntimeError – If num_shards is specified but shard_id is None.
RuntimeError – If shard_id is specified but num_shards is None.
RuntimeError – If parsing the JSON file fails.
ValueError – If num_parallel_workers exceeds the maximum number of threads.
ValueError – If task is not 'Detection', 'Stuff', 'Panoptic', 'Keypoint' or 'Captioning'.
ValueError – If annotation_file does not exist.
ValueError – If dataset_dir does not exist.
ValueError – If shard_id is not in range of [0, num_shards).
Note
Column '[_meta-filename, dtype=string]' won't be output unless an explicit rename dataset op is added to remove the prefix ('_meta-'); a usage sketch follows the Examples section below.
mindspore.dataset.PKSampler is not yet supported for the sampler parameter.
The parameters num_samples, shuffle, num_shards and shard_id can be used to control the sampler used in the dataset; their effects when combined with the sampler parameter are as follows.
| Parameter sampler | Parameter num_shards / shard_id | Parameter shuffle | Parameter num_samples | Sampler Used |
| --- | --- | --- | --- | --- |
| mindspore.dataset.Sampler type | None | None | None | sampler |
| numpy.ndarray, list, tuple, int type | / | / | num_samples | SubsetSampler(indices=sampler, num_samples=num_samples) |
| iterable type | / | / | num_samples | IterSampler(sampler=sampler, num_samples=num_samples) |
| None | num_shards / shard_id | None / True | num_samples | DistributedSampler(num_shards=num_shards, shard_id=shard_id, shuffle=True, num_samples=num_samples) |
| None | num_shards / shard_id | False | num_samples | DistributedSampler(num_shards=num_shards, shard_id=shard_id, shuffle=False, num_samples=num_samples) |
| None | None | None / True | None | RandomSampler(num_samples=num_samples) |
| None | None | None / True | num_samples | RandomSampler(replacement=True, num_samples=num_samples) |
| None | None | False | num_samples | SequentialSampler(num_samples=num_samples) |
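For illustration, the sketch below (placeholder paths) exercises three rows of the table above: passing a Sampler object, passing a plain list of indices, and sharding with num_shards/shard_id. It is a sketch of the documented combinations, not an exhaustive listing.

import mindspore.dataset as ds

coco_dataset_dir = "/path/to/coco_dataset_directory/images"
coco_annotation_file = "/path/to/coco_dataset_directory/annotation_file"

# 1) A Sampler object: shuffle and num_shards/shard_id must then stay unset.
sampler = ds.SequentialSampler(start_index=0, num_samples=64)
dataset = ds.CocoDataset(coco_dataset_dir, coco_annotation_file,
                         task='Detection', sampler=sampler)

# 2) A list of indices: a SubsetSampler is constructed internally.
dataset = ds.CocoDataset(coco_dataset_dir, coco_annotation_file,
                         task='Detection', sampler=[0, 2, 4, 6])

# 3) num_shards/shard_id: a DistributedSampler is constructed internally.
dataset = ds.CocoDataset(coco_dataset_dir, coco_annotation_file,
                         task='Detection', num_shards=2, shard_id=0, shuffle=False)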
Examples
>>> import mindspore.dataset as ds
>>> coco_dataset_dir = "/path/to/coco_dataset_directory/images"
>>> coco_annotation_file = "/path/to/coco_dataset_directory/annotation_file"
>>>
>>> # 1) Read COCO data for Detection task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Detection')
>>>
>>> # 2) Read COCO data for Stuff task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Stuff')
>>>
>>> # 3) Read COCO data for Panoptic task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Panoptic')
>>>
>>> # 4) Read COCO data for Keypoint task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Keypoint')
>>>
>>> # 5) Read COCO data for Captioning task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Captioning')
>>>
>>> # In COCO dataset, each dictionary has keys "image" and "annotation"
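Following up on the note above about the '_meta-filename' column, the sketch below (placeholder paths) enables extra_metadata and renames the hidden column so that the file name is actually emitted by the pipeline:

import mindspore.dataset as ds

dataset = ds.CocoDataset(dataset_dir="/path/to/coco_dataset_directory/images",
                         annotation_file="/path/to/coco_dataset_directory/annotation_file",
                         task='Detection',
                         extra_metadata=True)

# Strip the '_meta-' prefix so the column is output as 'filename'.
dataset = dataset.rename(input_columns=["_meta-filename"], output_columns=["filename"])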
About COCO dataset:
COCO (Microsoft Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset with several features: Object segmentation, Recognition in context, Superpixel stuff segmentation, 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, 250,000 people with keypoints. In contrast to the popular ImageNet dataset, COCO has fewer categories but more instances per category.
You can unzip the original COCO-2017 dataset files into this directory structure and read them with MindSpore's API.
.
└── coco_dataset_directory
    ├── train2017
    │   ├── 000000000009.jpg
    │   ├── 000000000025.jpg
    │   ├── ...
    ├── test2017
    │   ├── 000000000001.jpg
    │   ├── 000000058136.jpg
    │   ├── ...
    ├── val2017
    │   ├── 000000000139.jpg
    │   ├── 000000057027.jpg
    │   ├── ...
    └── annotations
        ├── captions_train2017.json
        ├── captions_val2017.json
        ├── instances_train2017.json
        ├── instances_val2017.json
        ├── person_keypoints_train2017.json
        └── person_keypoints_val2017.json
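As a sketch of wiring this layout to the constructor (the root path is a placeholder), each task pairs an image split folder with its matching annotation file:

import mindspore.dataset as ds

root = "/path/to/coco_dataset_directory"

# Detection on the 2017 training split.
detection = ds.CocoDataset(dataset_dir=root + "/train2017",
                           annotation_file=root + "/annotations/instances_train2017.json",
                           task='Detection')

# Keypoint detection on the 2017 validation split.
keypoint = ds.CocoDataset(dataset_dir=root + "/val2017",
                          annotation_file=root + "/annotations/person_keypoints_val2017.json",
                          task='Keypoint')

# Captioning on the 2017 validation split.
captioning = ds.CocoDataset(dataset_dir=root + "/val2017",
                            annotation_file=root + "/annotations/captions_val2017.json",
                            task='Captioning')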
Citation:
@article{DBLP:journals/corr/LinMBHPRDZ14,
  author        = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and
                   Lubomir D. Bourdev and Ross B. Girshick and James Hays and
                   Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and
                   C. Lawrence Zitnick},
  title         = {Microsoft {COCO:} Common Objects in Context},
  journal       = {CoRR},
  volume        = {abs/1405.0312},
  year          = {2014},
  url           = {http://arxiv.org/abs/1405.0312},
  archivePrefix = {arXiv},
  eprint        = {1405.0312},
  timestamp     = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/LinMBHPRDZ14.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
Pre-processing Operation
- Apply a function in this dataset.
- Concatenate the dataset objects in the input list.
- Filter dataset by predicate.
- Map func to each row in dataset and flatten the result.
- Apply each operation in operations to this dataset.
- Select the specified columns from the dataset and pass them into the pipeline in the specified order.
- Rename the columns in input datasets.
- Repeat this dataset count times.
- Reset the dataset for the next epoch.
- Save the dynamic data processed by the dataset pipeline in a common dataset format.
- Shuffle the dataset by creating a cache with the size of buffer_size.
- Skip the first N elements of this dataset.
- Split the dataset into smaller, non-overlapping datasets.
- Take the first specified number of samples from the dataset.
- Zip the datasets in the sense of input tuple of datasets.
Batch
- Combine batch_size consecutive rows into a batch, applying per_batch_map to the samples first.
- Bucket elements according to their lengths.
- Combine batch_size consecutive rows into a batch, applying pad_info to the samples first.
Iterator
- Create an iterator over the dataset that yields samples of type dict, where the key is the column name and the value is the data.
- Create an iterator over the dataset that yields samples of type list, whose elements are the data for each column.
Attribute
- Return the size of batch.
- Get the mapping dictionary from category names to category indexes.
- Return the names of the columns in dataset.
- Return the number of batches in an epoch.
- Get the replication times in RepeatDataset.
- Get the column index, which represents the corresponding relationship between the data column order and the network when using the sink mode.
- Get the number of classes in a dataset.
- Get the shapes of output data.
- Get the types of output data.
Apply Sampler
- Add a child sampler for the current dataset.
- Replace the last child sampler of the current dataset, keeping the parent sampler unchanged.
Others
- Release a blocking condition and trigger callback with given data.
- Add a blocking condition to the input Dataset; a synchronize action will be applied.
- Serialize a pipeline into a JSON string and dump it into a file if filename is provided.