mindspore.dataset.SST2Dataset
- class mindspore.dataset.SST2Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=Shuffle.GLOBAL, num_shards=None, shard_id=None, cache=None)[source]
SST2 (Stanford Sentiment Treebank v2) dataset.
The generated dataset's train.tsv and dev.tsv have two columns [sentence, label]; the generated test.tsv has one column [sentence]. The tensors of column sentence and column label are of string type.
- Parameters
dataset_dir (str) – Path to the root directory that contains the dataset.
usage (str, optional) – Usage of this dataset, can be "train", "test" or "dev". "train" will read from 67,349 train samples, "test" will read from 1,821 test samples, "dev" will read from all 872 samples. Default: None, will read train samples.
num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all text.
num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers().
shuffle (Union[bool, Shuffle], optional) – Perform reshuffling of the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL. If shuffle is False, no shuffling will be performed; if shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Set the mode of data shuffling by passing in an enumeration value: Shuffle.GLOBAL: shuffle the samples.
num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.
shard_id (int, optional) – The shard ID within num_shards. This argument can only be specified when num_shards is also specified. Default: None.
cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.
- Raises
RuntimeError – If dataset_dir does not contain data files.
ValueError – If num_parallel_workers exceeds the max thread numbers.
RuntimeError – If num_shards is specified but shard_id is None.
RuntimeError – If shard_id is specified but num_shards is None.
ValueError – If shard_id is not in range of [0, num_shards).
Examples
>>> import mindspore.dataset as ds
>>> sst2_dataset_dir = "/path/to/sst2_dataset_directory"
>>>
>>> # 1) Read 3 samples from SST2 dataset
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, num_samples=3)
>>>
>>> # 2) Read train samples from SST2 dataset
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, usage="train")
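The sharding and shuffling parameters described above compose with the same constructor. A minimal additional sketch, assuming the placeholder directory above and a two-device data-parallel setup:

>>> # 3) Read the train split as shard 0 of 2
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, usage="train", num_shards=2, shard_id=0)
>>>
>>> # 4) Disable shuffling explicitly
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, usage="dev", shuffle=False)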
About SST2 dataset:

The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
Here is the original SST2 dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore's API.
.
└── sst2_dataset_dir
    ├── train.tsv
    ├── test.tsv
    ├── dev.tsv
    └── original
Citation:
@inproceedings{socher-etal-2013-recursive,
    title     = {Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank},
    author    = {Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and
                 Manning, Christopher D. and Ng, Andrew and Potts, Christopher},
    booktitle = {Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing},
    month     = oct,
    year      = {2013},
    address   = {Seattle, Washington, USA},
    publisher = {Association for Computational Linguistics},
    url       = {https://www.aclweb.org/anthology/D13-1170},
    pages     = {1631--1642},
}
Pre-processing Operation
apply | Apply a function in this dataset.
concat | Concatenate the dataset objects in the input list.
filter | Filter dataset by predicate.
flat_map | Map func to each row in dataset and flatten the result.
map | Apply each operation in operations to this dataset.
project | The specified columns will be selected from the dataset and passed into the pipeline with the order specified.
rename | Rename the columns in input datasets.
repeat | Repeat this dataset count times.
reset | Reset the dataset for the next epoch.
save | Save the dynamic data processed by the dataset pipeline in a common dataset format.
shuffle | Shuffle the dataset by creating a cache with the size of buffer_size.
skip | Skip the first N elements of this dataset.
split | Split the dataset into smaller, non-overlapping datasets.
take | Take the first specified number of samples from the dataset.
zip | Zip the datasets in the sense of input tuple of datasets.
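Most of these operations return a new dataset object, so they chain naturally. A minimal sketch of such a chain, assuming the placeholder directory from the examples above and a plain Python callable as the map operation:

>>> import numpy as np
>>> import mindspore.dataset as ds
>>>
>>> dataset = ds.SST2Dataset(dataset_dir="/path/to/sst2_dataset_directory", usage="dev")
>>> # map() hands each value of the selected column to the callable as a numpy array
>>> def to_lower(sentence):
...     return np.array(str(sentence).lower())
...
>>> dataset = dataset.map(operations=to_lower, input_columns=["sentence"])
>>> dataset = dataset.take(3)  # keep only the first 3 rows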
Batch
batch | Combine batch_size consecutive rows into batches, applying per_batch_map to the samples first.
bucket_batch_by_length | Bucket elements according to their lengths.
padded_batch | Combine batch_size consecutive rows into batches, applying pad_info to the samples first.
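Batching follows the same chaining pattern. A sketch that groups the rows from the pipeline above into fixed-size batches (drop_remainder discards a final incomplete batch):

>>> dataset = dataset.batch(batch_size=2, drop_remainder=True)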
Iterator
create_dict_iterator | Create an iterator over the dataset that yields samples of type dict, where the key is the column name and the value is the data.
create_tuple_iterator | Create an iterator over the dataset that yields samples of type list, whose elements are the data for each column.
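A minimal iteration sketch over the pipeline built above; with output_numpy=True the iterator yields numpy values instead of mindspore.Tensor:

>>> for row in dataset.create_dict_iterator(output_numpy=True, num_epochs=1):
...     print(row["sentence"], row["label"])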
Attribute
get_batch_size | Return the size of batch.
get_class_indexing | Get the mapping dictionary from category names to category indexes.
get_col_names | Return the names of the columns in dataset.
get_dataset_size | Return the number of batches in an epoch.
get_repeat_count | Get the replication times in RepeatDataset.
input_indexs | Get the column index, which represents the corresponding relationship between the data column order and the network when using the sink mode.
num_classes | Get the number of classes in a dataset.
output_shapes | Get the shapes of output data.
output_types | Get the types of output data.
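A short sketch of these getters on the same pipeline; note that some of them need to execute part of the pipeline to produce an answer:

>>> print(dataset.get_col_names())     # ['sentence', 'label']
>>> print(dataset.get_batch_size())    # 2, from the batch() call above
>>> print(dataset.get_dataset_size())  # number of batches in one epoch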
Apply Sampler
add_sampler | Add a child sampler for the current dataset.
use_sampler | Replace the last child sampler of the current dataset, leaving the parent sampler unchanged.
Others
sync_update | Release a blocking condition and trigger callback with given data.
sync_wait | Add a blocking condition to the input Dataset; a synchronize action will be applied.
to_json | Serialize a pipeline into a JSON string and dump it into a file if filename is provided.
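As one example from this group, to_json() can be used to inspect or persist the pipeline definition; the filename below is a placeholder:

>>> json_str = dataset.to_json()                    # serialize to a JSON string
>>> dataset.to_json(filename="sst2_pipeline.json")  # or dump it to a file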