mindspore.dataset.YahooAnswersDataset
- class mindspore.dataset.YahooAnswersDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=Shuffle.GLOBAL, num_shards=None, shard_id=None, cache=None)
YahooAnswers dataset.
The generated dataset has four columns [class, title, content, answer], whose data type is string.
- Parameters
dataset_dir (str) – Path to the root directory that contains the dataset.
usage (str, optional) – Usage of this dataset, can be 'train', 'test' or 'all'. 'train' will read from 1,400,000 train samples, 'test' will read from 60,000 test samples, 'all' will read from all 1,460,000 samples. Default: None, read all samples.
num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, include all text.
num_parallel_workers (int, optional) – Number of worker threads used to read the data. Default: None, use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers().
shuffle (Union[bool, Shuffle], optional) – Perform reshuffling of the data every epoch. Both bool values and Shuffle enum values are accepted. Default: Shuffle.GLOBAL. If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in an enumeration value:
  Shuffle.GLOBAL: Shuffle both the files and the samples.
  Shuffle.FILES: Shuffle files only.
num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.
shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.
cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.
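As a hedged sketch of how the shuffle, num_shards and shard_id parameters combine for distributed reads (the directory path is a placeholder and the shard counts are illustrative, not defaults):

>>> import mindspore.dataset as ds
>>> yahoo_answers_dataset_dir = "/path/to/yahoo_answers_dataset_directory"
>>> # Read shard 0 of 4 from the train split, shuffling at file level only
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir,
...                                  usage="train",
...                                  shuffle=ds.Shuffle.FILES,
...                                  num_shards=4,
...                                  shard_id=0)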
- Raises
RuntimeError – If dataset_dir does not contain data files.
RuntimeError – If num_shards is specified but shard_id is None.
RuntimeError – If shard_id is specified but num_shards is None.
ValueError – If shard_id is not in the range [0, num_shards).
ValueError – If num_parallel_workers exceeds the maximum number of threads.
Examples
>>> import mindspore.dataset as ds
>>> yahoo_answers_dataset_dir = "/path/to/yahoo_answers_dataset_directory"
>>>
>>> # 1) Read 3 samples from YahooAnswers dataset
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, num_samples=3)
>>>
>>> # 2) Read train samples from YahooAnswers dataset
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, usage="train")
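A further hedged sketch that iterates the pipeline and prints the four string columns listed above, building on the variables defined in the example:

>>> # 3) Inspect the four string columns of a single sample
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, num_samples=1)
>>> for row in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(row["class"], row["title"], row["content"], row["answer"])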
About YahooAnswers dataset:
The YahooAnswers dataset consists of 1,460,000 text samples in 10 classes. There are 1,400,000 samples in train.csv and 60,000 samples in test.csv. The 10 different classes represent Society & Culture, Science & Mathematics, Health, Education & Reference, Computers & Internet, Sports, Business & Finance, Entertainment & Music, Family & Relationships, and Politics & Government.
Here is the original YahooAnswers dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore's API.
.
└── yahoo_answers_dataset_dir
    ├── train.csv
    ├── test.csv
    ├── classes.txt
    └── readme.txt
Citation:
@article{YahooAnswers,
  title        = {Yahoo! Answers Topic Classification Dataset},
  author       = {Xiang Zhang},
  year         = {2015},
  howpublished = {}
}
Pre-processing Operation

| apply | Apply a function in this dataset. |
| concat | Concatenate the dataset objects in the input list. |
| filter | Filter dataset by predicate. |
| flat_map | Map func to each row in dataset and flatten the result. |
| map | Apply each operation in operations to this dataset. |
| project | Select the specified columns from the dataset and pass them into the pipeline in the specified order. |
| rename | Rename the columns in the input dataset. |
| repeat | Repeat this dataset count times. |
| reset | Reset the dataset for the next epoch. |
| save | Save the dynamic data processed by the dataset pipeline in a common dataset format. |
| shuffle | Shuffle the dataset by creating a cache with the size of buffer_size. |
| skip | Skip the first N elements of this dataset. |
| split | Split the dataset into smaller, non-overlapping datasets. |
| take | Take at most the given number of elements from the dataset. |
| zip | Zip the datasets in the sense of an input tuple of datasets. |
Batch

| batch | Combine batch_size consecutive rows into batches, applying per_batch_map to the samples first. |
| bucket_batch_by_length | Bucket elements according to their lengths. |
| padded_batch | Combine batch_size consecutive rows into batches, applying pad_info to the samples first. |
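A hedged illustration of the batch operation on this dataset; the batch size and drop_remainder value are illustrative choices, and ds and yahoo_answers_dataset_dir are assumed from the Examples above:

>>> # Group 32 consecutive rows into one batch; drop the incomplete final batch
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, usage="test")
>>> dataset = dataset.batch(batch_size=32, drop_remainder=True)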
Iterator

| create_dict_iterator | Create an iterator over the dataset that yields each row as a dictionary keyed by column name. |
| create_tuple_iterator | Create an iterator over the dataset that yields each row as a list of column values. |
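A minimal sketch of the two iterator styles, assuming the dataset object built in the Examples above; output_numpy=True returns NumPy values instead of Tensors:

>>> # Dictionary iterator: each row maps column names to values
>>> for row in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(row["class"], row["title"])
>>> # Tuple iterator: each row is a list of values in column order
>>> for item in dataset.create_tuple_iterator(num_epochs=1, output_numpy=True):
...     print(item[0], item[1])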
Attribute

| get_batch_size | Return the size of batch. |
| get_class_indexing | Return the class index. |
| get_col_names | Return the names of the columns in the dataset. |
| get_dataset_size | Return the number of batches in an epoch. |
| get_repeat_count | Get the replication times in RepeatDataset. |
| input_indexs | Get the column index, which represents the corresponding relationship between the data column order and the network when using the sink mode. |
| num_classes | Get the number of classes in a dataset. |
| output_shapes | Get the shapes of output data. |
| output_types | Get the types of output data. |
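A brief, hedged sketch of querying pipeline metadata with the attribute helpers above, reusing the placeholder path from the Examples:

>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, usage="test")
>>> print(dataset.get_col_names())     # expected: ['class', 'title', 'content', 'answer']
>>> print(dataset.get_dataset_size())  # rows per epoch before batching
>>> print(dataset.output_types())      # data type of each output column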
Apply Sampler

| add_sampler | Add a child sampler for the current dataset. |
| use_sampler | Replace the last child sampler of the current dataset, keeping the parent sampler unchanged. |
Others

| device_que | Return a transferred Dataset that transfers data through a device. |
| sync_update | Release a blocking condition and trigger callback with given data. |
| sync_wait | Add a blocking condition to the input Dataset and a synchronize action will be applied. |
| to_json | Serialize a pipeline into a JSON string and dump it into a file if filename is provided. |
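For example, to_json can be used to inspect the pipeline definition; the file name below is only illustrative, and dataset is assumed from the sketches above:

>>> # Serialize the pipeline; returns the JSON string and writes it to the given file
>>> json_str = dataset.to_json("yahoo_answers_pipeline.json")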