mindspore.nn.probability
mindspore.nn.probability.bijector
Bijectors are the high-level components used to construct the probabilistic network.
- class mindspore.nn.probability.bijector.Bijector(is_constant_jacobian=False, is_injective=True, name=None, dtype=None, param=None)[source]
Bijector class.
- Parameters
is_constant_jacobian (bool) – Whether the Bijector has constant derivative. Default: False.
is_injective (bool) – Whether the Bijector is a one-to-one mapping. Default: True.
name (str) – The name of the Bijector. Default: None.
dtype (mindspore.dtype) – The type of the distributions that the Bijector can operate on. Default: None.
param (dict) – The parameters used to initialize the Bijector. Default: None.
- construct(name, *args, **kwargs)[source]
Override construct in Cell.
Note
Names of supported functions include: ‘forward’, ‘inverse’, ‘forward_log_jacobian’, and ‘inverse_log_jacobian’.
- forward(*args, **kwargs)[source]
Forward transformation: transform the input value to another distribution.
- forward_log_jacobian(*args, **kwargs)[source]
Logarithm of the derivative of the forward transformation.
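To make these four functions concrete, here is a minimal NumPy sketch of what they compute for the Exp bijector defined below (illustration only; it does not call the MindSpore API):
>>> import numpy as np
>>> x = np.array([0.0, 1.0, 2.0])
>>> y = np.exp(x)                          # forward(x): y = exp(x)
>>> np.allclose(np.log(y), x)              # inverse(forward(x)) recovers x
True
>>> forward_log_jac = x                    # log|d exp(x)/dx| = x
>>> inverse_log_jac = -np.log(y)           # log|d log(y)/dy| = -log(y)
>>> np.allclose(forward_log_jac, -inverse_log_jac)
True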
- class mindspore.nn.probability.bijector.Exp(name='Exp')[source]
Exponential Bijector. This Bijector performs the operation:
\[Y = \exp(X)\]
- Parameters
name (str) – The name of the Bijector. Default: ‘Exp’.
Examples
>>> # To initialize an Exp bijector.
>>> import mindspore.nn.probability.bijector as msb
>>> n = msb.Exp()
>>>
>>> # To use an Exp bijector in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.e1 = msb.Exp()
>>>
>>>     def construct(self, value):
>>>         # Similar calls can be made to other functions
>>>         # by replacing `forward` by the name of the function.
>>>         ans1 = self.e1.forward(value)
>>>         ans2 = self.e1.inverse(value)
>>>         ans3 = self.e1.forward_log_jacobian(value)
>>>         ans4 = self.e1.inverse_log_jacobian(value)
- class mindspore.nn.probability.bijector.PowerTransform(power=0, name='PowerTransform', param=None)[source]
Power Bijector. This Bijector performs the operation:
\[Y = g(X) = (1 + X \cdot c)^{1 / c}, \quad X \ge -1 / c\]
where \(c \ge 0\) is the power.
The power transform maps inputs from [-1/c, inf] to [0, inf].
This Bijector is equivalent to the Exp bijector when c=0.
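The equivalence with the Exp bijector at c=0 can be checked as a numerical limit; the sketch below uses plain NumPy for illustration only:
>>> import numpy as np
>>> x = np.array([-0.5, 0.0, 1.5])
>>> c = 1e-6
>>> # (1 + x * c) ** (1 / c) approaches exp(x) as c -> 0
>>> np.allclose((1 + x * c) ** (1 / c), np.exp(x), rtol=1e-4)
True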
- Raises
ValueError – When the power is less than 0 or is not known statically.
- Parameters
power (int, float) – The power c in the transformation above; it must be greater than or equal to 0. Default: 0.
name (str) – The name of the bijector. Default: ‘PowerTransform’.
param (dict) – The parameters used to initialize the bijector. These parameters are only used when other Bijectors inherit from PowerTransform to pass in parameters. In this case the derived Bijector may overwrite the argument param. Default: None.
Examples
>>> # To initialize a PowerTransform bijector of power 0.5.
>>> import mindspore.nn.probability.bijector as msb
>>> n = msb.PowerTransform(0.5)
>>>
>>> # To use a PowerTransform bijector in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.p1 = msb.PowerTransform(0.5)
>>>
>>>     def construct(self, value):
>>>         # Similar calls can be made to other functions
>>>         # by replacing 'forward' by the name of the function.
>>>         ans1 = self.p1.forward(value)
>>>         ans2 = self.p1.inverse(value)
>>>         ans3 = self.p1.forward_log_jacobian(value)
>>>         ans4 = self.p1.inverse_log_jacobian(value)
- class mindspore.nn.probability.bijector.ScalarAffine(scale=1.0, shift=0.0, name='ScalarAffine')[source]
Scalar Affine Bijector. This Bijector performs the operation:
\[Y = a \cdot X + b\]
where \(a\) is the scale factor and \(b\) is the shift factor.
- Parameters
scale (float) – The scale factor. Default: 1.0.
shift (float) – The shift factor. Default: 0.0.
name (str) – The name of the Bijector. Default: ‘ScalarAffine’.
Examples
>>> # To initialize a ScalarAffine bijector of scale 1 and shift 2.
>>> scalaraffine = nn.probability.bijector.ScalarAffine(1, 2)
>>>
>>> # To use a ScalarAffine bijector in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.s1 = nn.probability.bijector.ScalarAffine(1, 2)
>>>
>>>     def construct(self, value):
>>>         # Similar calls can be made to other functions
>>>         # by replacing 'forward' by the name of the function.
>>>         ans1 = self.s1.forward(value)
>>>         ans2 = self.s1.inverse(value)
>>>         ans3 = self.s1.forward_log_jacobian(value)
>>>         ans4 = self.s1.inverse_log_jacobian(value)
- class mindspore.nn.probability.bijector.Softplus(sharpness=1.0, name='Softplus')[source]
Softplus Bijector. This Bijector performs the operation:
\[Y = \frac{\log(1 + e^{kX})}{k}\]
where \(k\) is the sharpness factor.
- Parameters
sharpness (float) – The sharpness factor, denoted as k in the formula above. Default: 1.0.
name (str) – The name of the Bijector. Default: ‘Softplus’.
Examples
>>> # To initialize a Softplus bijector of sharpness 2.
>>> softplus = nn.probability.bijector.Softplus(2)
>>>
>>> # To use a Softplus bijector in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.sp1 = nn.probability.bijector.Softplus(2)
>>>
>>>     def construct(self, value):
>>>         # Similar calls can be made to other functions
>>>         # by replacing 'forward' by the name of the function.
>>>         ans1 = self.sp1.forward(value)
>>>         ans2 = self.sp1.inverse(value)
>>>         ans3 = self.sp1.forward_log_jacobian(value)
>>>         ans4 = self.sp1.inverse_log_jacobian(value)
mindspore.nn.probability.bnn_layers
bnn_layers are the high-level components used to construct the Bayesian neural network.
- class mindspore.nn.probability.bnn_layers.ConvReparam(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_prior_fn=NormalPrior, weight_posterior_fn=<lambda name, shape: NormalPosterior(name=name, shape=shape)>, bias_prior_fn=NormalPrior, bias_posterior_fn=<lambda name, shape: NormalPosterior(name=name, shape=shape)>)[source]
Convolutional variational layers with Reparameterization.
For more details, refer to the paper Auto-Encoding Variational Bayes.
- Parameters
in_channels (int) – The number of input channels \(C_{in}\).
out_channels (int) – The number of output channels \(C_{out}\).
kernel_size (Union[int, tuple[int]]) – The data type is an integer or a tuple of 2 integers. The kernel size specifies the height and width of the 2D convolution window. A single integer sets the same value for both the height and the width of the kernel. A tuple of 2 integers sets the height and the width of the kernel, respectively.
stride (Union[int, tuple[int]]) – The distance the kernel moves. An integer sets the stride of movement for both the height and the width; a tuple of two integers sets the strides for the height and the width, respectively. Default: 1.
pad_mode (str) – Specifies the padding mode. The optional values are “same”, “valid”, and “pad”. Default: “same”. (A sketch of the resulting output sizes is given after the example below.)
same: Adopts the way of completion. The output height and width will be the same as the input. The total amount of padding will be calculated in the horizontal and vertical directions and evenly distributed to the top and bottom, left and right if possible. Otherwise, the last extra padding will be applied from the bottom and the right side. If this mode is set, padding must be 0.
valid: Adopts the way of discarding. The possible largest height and width of the output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.
pad: Implicit paddings on both sides of the input. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.
padding (Union[int, tuple[int]]) – Implicit paddings on both sides of the input. Default: 0.
dilation (Union[int, tuple[int]]) – The data type is an integer or a tuple of 2 integers. This parameter specifies the dilation rate of the dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input. Default: 1.
group (int) – Splits the filter into groups; in_channels and out_channels must be divisible by the number of groups. Default: 1.
has_bias (bool) – Specifies whether the layer uses a bias vector. Default: False.
weight_prior_fn – The prior distribution for weight. It must return a mindspore distribution instance. Default: NormalPrior (which creates an instance of standard normal distribution). The current version only supports normal distribution.
weight_posterior_fn – The posterior distribution for sampling weight. It must be a function handle which returns a mindspore distribution instance. Default: lambda name, shape: NormalPosterior(name=name, shape=shape). The current version only supports normal distribution.
bias_prior_fn – The prior distribution for bias vector. It must return a mindspore distribution. Default: NormalPrior (which creates an instance of standard normal distribution). The current version only supports normal distribution.
bias_posterior_fn – The posterior distribution for sampling bias vector. It must be a function handle which returns a mindspore distribution instance. Default: lambda name, shape: NormalPosterior(name=name, shape=shape). The current version only supports normal distribution.
- Inputs:
input (Tensor) - The shape of the tensor is \((N, C_{in}, H_{in}, W_{in})\).
- Outputs:
Tensor, with the shape being \((N, C_{out}, H_{out}, W_{out})\).
Examples
>>> net = ConvReparam(120, 240, 4, has_bias=False)
>>> input = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> net(input).shape
(1, 240, 1024, 640)
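As a rough guide to how pad_mode affects the output shape, the following sketch applies commonly used convolution shape rules (assumed here for illustration; consult the operator documentation for edge cases):
>>> import math
>>> def conv_out_size(size, kernel, stride=1, dilation=1, pad_mode='same', padding=0):
>>>     # Approximate output height/width of a 2D convolution for one spatial dimension.
>>>     if pad_mode == 'same':
>>>         return math.ceil(size / stride)
>>>     if pad_mode == 'valid':
>>>         return math.floor((size - dilation * (kernel - 1) - 1) / stride) + 1
>>>     # pad_mode == 'pad'
>>>     return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride) + 1
>>> conv_out_size(1024, kernel=4, pad_mode='same')   # matches the example above
1024
>>> conv_out_size(1024, kernel=4, pad_mode='valid')
1021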
- class mindspore.nn.probability.bnn_layers.DenseReparam(in_channels, out_channels, activation=None, has_bias=True, weight_prior_fn=NormalPrior, weight_posterior_fn=<lambda name, shape: NormalPosterior(name=name, shape=shape)>, bias_prior_fn=NormalPrior, bias_posterior_fn=<lambda name, shape: NormalPosterior(name=name, shape=shape)>)[source]
Dense variational layers with Reparameterization.
For more details, refer to the paper Auto-Encoding Variational Bayes.
Applies dense-connected layer to the input. This layer implements the operation as:
\[\text{outputs} = \text{activation}(\text{inputs} * \text{weight} + \text{bias}),\]
where \(\text{activation}\) is the activation function passed as the activation argument (if passed in), \(\text{weight}\) is a weight matrix with the same data type as the inputs created by the layer and sampled from the posterior distribution of the weight, and \(\text{bias}\) is a bias vector with the same data type as the inputs created by the layer (only if has_bias is True), sampled from the posterior distribution of the bias.
- Parameters
in_channels (int) – The number of input channels.
out_channels (int) – The number of output channels.
has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.
activation (str, Cell) – An activation function applied to the output of the layer. The type of activation can be a string (eg. ‘relu’) or a Cell (eg. nn.ReLU()). Note that if the type of activation is Cell, it must be instantiated beforehand. Default: None.
weight_prior_fn – The prior distribution for weight. It must return a mindspore distribution instance. Default: NormalPrior (which creates an instance of standard normal distribution). The current version only supports normal distribution.
weight_posterior_fn – The posterior distribution for sampling weight. It must be a function handle which returns a mindspore distribution instance. Default: lambda name, shape: NormalPosterior(name=name, shape=shape). The current version only supports normal distribution.
bias_prior_fn – The prior distribution for bias vector. It must return a mindspore distribution. Default: NormalPrior (which creates an instance of standard normal distribution). The current version only supports normal distribution.
bias_posterior_fn – The posterior distribution for sampling bias vector. It must be a function handle which returns a mindspore distribution instance. Default: lambda name, shape: NormalPosterior(name=name, shape=shape). The current version only supports normal distribution.
- Inputs:
input (Tensor) - The shape of the tensor is \((N, in\_channels)\).
- Outputs:
Tensor, the shape of the tensor is \((N, out\_channels)\).
Examples
>>> net = DenseReparam(3, 4)
>>> input = Tensor(np.random.randint(0, 255, [2, 3]), mindspore.float32)
>>> net(input).shape
(2, 4)
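The "Reparameterization" in the layer name refers to the standard reparameterization trick: instead of sampling the weight directly, a deterministic transform of a noise sample is used so that gradients can flow to the distribution parameters. A minimal NumPy sketch of the idea (illustration only, not the layer's actual implementation):
>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> x = rng.normal(size=(2, 3))        # a batch of inputs, shape (N, in_channels)
>>> mu = np.zeros((3, 4))              # posterior mean of the weight
>>> rho = np.full((3, 4), -5.0)        # unconstrained scale of the weight
>>> sigma = np.log1p(np.exp(rho))      # softplus keeps the scale positive
>>> eps = rng.normal(size=(3, 4))      # noise independent of the parameters
>>> weight = mu + sigma * eps          # reparameterized weight sample
>>> out = x @ weight                   # same output shape as DenseReparam: (2, 4)
>>> out.shape
(2, 4)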
- class mindspore.nn.probability.bnn_layers.WithBNNLossCell(backbone, loss_fn, dnn_factor, bnn_factor)[source]
Generate a suitable WithLossCell for BNN to wrap the Bayesian network with its loss function. (A sketch of the combined loss appears after the example below.)
- Parameters
backbone (Cell) – The target network.
loss_fn (Cell) – The loss function used to compute loss.
dnn_factor (int, float) – The coefficient of backbone’s loss, which is computed by the loss function. Default: 1.
bnn_factor (int, float) – The coefficient of KL loss, which is the KL divergence of Bayesian layer. Default: 1.
- Inputs:
data (Tensor) - Tensor of shape \((N, \ldots)\).
label (Tensor) - Tensor of shape \((N, \ldots)\).
- Outputs:
Tensor, a scalar tensor with shape \(()\).
Examples
>>> net = Net()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> net_with_criterion_object = WithBNNLossCell(net, loss_fn)
>>> net_with_criterion = net_with_criterion_object()
>>>
>>> batch_size = 2
>>> data = Tensor(np.ones([batch_size, 3, 64, 64]).astype(np.float32) * 0.01)
>>> label = Tensor(np.ones([batch_size, 1, 1, 1]).astype(np.int32))
>>>
>>> net_with_criterion(data, label)
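How the two factors combine can be sketched as follows (a schematic only, not the cell's actual code; backbone_loss and kl_divergence are hypothetical placeholders for the task loss and the KL divergence of the Bayesian layers):
>>> def combined_loss(backbone_loss, kl_divergence, dnn_factor=1, bnn_factor=1):
>>>     # total loss = dnn_factor * task loss + bnn_factor * KL loss
>>>     return dnn_factor * backbone_loss + bnn_factor * kl_divergence
>>> combined_loss(1.0, 0.5)
1.5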
- class mindspore.nn.probability.bnn_layers.NormalPosterior(name, shape, dtype=mindspore.float32, loc_mean=0, loc_std=0.1, untransformed_scale_mean=-5, untransformed_scale_std=0.1)[source]
Build Normal distributions with trainable parameters.
- Parameters
name (str) – Name prepended to trainable parameter.
shape (list, tuple) – Shape of the mean and standard deviation.
dtype (mindspore.dtype) – The argument is used to define the data type of the output tensor. Default: mindspore.float32.
loc_mean (int, float) – Mean of distribution to initialize trainable parameters. Default: 0.
loc_std (int, float) – Standard deviation of distribution to initialize trainable parameters. Default: 0.1.
untransformed_scale_mean (int, float) – Mean of distribution to initialize trainable parameters. Default: -5.
untransformed_scale_std (int, float) – Standard deviation of distribution to initialize trainable parameters. Default: 0.1.
- Returns
Cell, a normal distribution.
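The "untransformed" scale parameters indicate that the trainable scale is stored in an unconstrained form and mapped to a positive standard deviation; a common choice is softplus, sketched below (an assumption for illustration, not necessarily the exact transform used by this class):
>>> import numpy as np
>>> untransformed_scale = np.random.normal(loc=-5, scale=0.1, size=(3, 4))
>>> sd = np.log1p(np.exp(untransformed_scale))  # softplus maps R to (0, inf)
>>> bool((sd > 0).all())                        # the resulting standard deviation is always positive
True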
- class mindspore.nn.probability.bnn_layers.NormalPrior(dtype=mindspore.float32, mean=0, std=0.1)[source]
To initialize a normal distribution of mean 0 and standard deviation 0.1.
- Parameters
dtype (mindspore.dtype) – The argument is used to define the data type of the output tensor. Default: mindspore.float32.
mean (int, float) – Mean of normal distribution. Default: 0.
std (int, float) – Standard deviation of normal distribution. Default: 0.1.
- Returns
Cell, a normal distribution.
mindspore.nn.probability.distribution
Distributions are the high-level components used to construct the probabilistic network.
- class mindspore.nn.probability.distribution.Bernoulli(probs=None, seed=None, dtype=mindspore.int32, name='Bernoulli')[source]
Bernoulli Distribution.
- Parameters
probs (float, list, numpy.ndarray, Tensor, Parameter) – The probability that the outcome is 1.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the event samples. Default: mstype.int32.
name (str) – The name of the distribution. Default: ‘Bernoulli’.
Note
probs must be a proper probability (0 < p < 1). dist_spec_args is probs.
Examples
>>> # To initialize a Bernoulli distribution of the probability 0.5.
>>> import mindspore.nn.probability.distribution as msd
>>> b = msd.Bernoulli(0.5, dtype=mstype.int32)
>>>
>>> # The following creates two independent Bernoulli distributions.
>>> b = msd.Bernoulli([0.5, 0.5], dtype=mstype.int32)
>>>
>>> # A Bernoulli distribution can be initialized without arguments.
>>> # In this case, `probs` must be passed in through arguments during function calls.
>>> b = msd.Bernoulli(dtype=mstype.int32)
>>>
>>> # To use the Bernoulli distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.b1 = msd.Bernoulli(0.5, dtype=mstype.int32)
>>>         self.b2 = msd.Bernoulli(dtype=mstype.int32)
>>>
>>>     # All the following calls in construct are valid.
>>>     def construct(self, value, probs_b, probs_a):
>>>         # Private interfaces of probability functions corresponding to public interfaces, including
>>>         # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, are the same as follows.
>>>         # Args:
>>>         #     value (Tensor): the value to be evaluated.
>>>         #     probs1 (Tensor): the probability of success. Default: self.probs.
>>>
>>>         # Examples of `prob`.
>>>         # Similar calls can be made to other probability functions
>>>         # by replacing `prob` by the name of the function.
>>>         ans = self.b1.prob(value)
>>>         # Evaluate `prob` with respect to distribution b.
>>>         ans = self.b1.prob(value, probs_b)
>>>         # `probs` must be passed in during function calls.
>>>         ans = self.b2.prob(value, probs_a)
>>>
>>>         # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>>         # Args:
>>>         #     probs1 (Tensor): the probability of success. Default: self.probs.
>>>
>>>         # Examples of `mean`. `sd`, `var`, and `entropy` are similar.
>>>         ans = self.b1.mean()        # return 0.5
>>>         ans = self.b1.mean(probs_b) # return probs_b
>>>         # `probs` must be passed in during function calls.
>>>         ans = self.b2.mean(probs_a)
>>>
>>>         # Interfaces of `kl_loss` and `cross_entropy` are the same as follows:
>>>         # Args:
>>>         #     dist (str): the name of the distribution. Only 'Bernoulli' is supported.
>>>         #     probs1_b (Tensor): the probability of success of distribution b.
>>>         #     probs1_a (Tensor): the probability of success of distribution a. Default: self.probs.
>>>
>>>         # Examples of `kl_loss`. `cross_entropy` is similar.
>>>         ans = self.b1.kl_loss('Bernoulli', probs_b)
>>>         ans = self.b1.kl_loss('Bernoulli', probs_b, probs_a)
>>>         # An additional `probs_a` must be passed in.
>>>         ans = self.b2.kl_loss('Bernoulli', probs_b, probs_a)
>>>
>>>         # Examples of `sample`.
>>>         # Args:
>>>         #     shape (tuple): the shape of the sample. Default: ().
>>>         #     probs1 (Tensor): the probability of success. Default: self.probs.
>>>         ans = self.b1.sample()
>>>         ans = self.b1.sample((2,3))
>>>         ans = self.b1.sample((2,3), probs_b)
>>>         ans = self.b2.sample((2,3), probs_a)
- property probs
Return the probability that the outcome is 1.
- class mindspore.nn.probability.distribution.Categorical(probs=None, logits=None, seed=None, dtype=mindspore.int32, name='Categorical')[source]
Create a categorical distribution parameterized by either probabilities or logits (but not both).
- Parameters
probs (Tensor, list, numpy.ndarray, Parameter) – Event probabilities.
logits (Tensor, list, numpy.ndarray, Parameter, float) – Event log-odds.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the distribution. Default: mstype.int32.
name (str) – The name of the distribution. Default: ‘Categorical’.
Note
probs must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1.
Examples
>>> # To initialize a Categorical distribution with probs [0.5, 0.5].
>>> import mindspore.nn.probability.distribution as msd
>>> b = msd.Categorical(probs=[0.5, 0.5], dtype=mstype.int32)
>>>
>>> # To use a Categorical distribution in a network.
>>> class net(Cell):
>>>     def __init__(self, probs):
>>>         super(net, self).__init__()
>>>         self.ca = msd.Categorical(probs=probs, dtype=mstype.int32)
>>>
>>>     # All the following calls in construct are valid.
>>>     def construct(self, value):
>>>         # Similar calls can be made to logits.
>>>         ans = self.ca.probs
>>>         # value must be Tensor(mstype.float32, bool, mstype.int32).
>>>         ans = self.ca.log_prob(value)
>>>
>>>         # Usage of enumerate_support.
>>>         ans = self.ca.enumerate_support()
>>>
>>>         # Usage of entropy.
>>>         ans = self.ca.entropy()
>>>
>>>         # Sample.
>>>         ans = self.ca.sample()
>>>         ans = self.ca.sample((2,3))
>>>         ans = self.ca.sample((2,))
- enumerate_support(expand=True)[source]
Enumerate categories.
- Parameters
expand (bool) – Whether to expand.
- property logits
Return the logits.
- property probs
Return the probability.
- class mindspore.nn.probability.distribution.Distribution(seed, dtype, name, param)[source]
Base class for all mathematical distributions.
- Parameters
seed (int) – The seed is used in sampling. The global seed is used if it is None.
dtype (mindspore.dtype) – The type of the event samples.
name (str) – The name of the distribution.
param (dict) – The parameters used to initialize the distribution.
Note
Derived classes must override operations such as _mean, _prob, and _log_prob. Required arguments, such as value for _prob, must be passed in through args or kwargs. dist_spec_args, which specify a new distribution, are optional.
dist_spec_args is unique for each type of distribution. For example, mean and sd are the dist_spec_args for a Normal distribution, while rate is the dist_spec_args for an Exponential distribution.
For all functions, passing in dist_spec_args is optional. Function calls with the additional dist_spec_args passed in will evaluate the result with a new distribution specified by the dist_spec_args. However, it will not change the original distribution.
- cdf(value, *args, **kwargs)[source]
Evaluate the cdf at given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- construct(name, *args, **kwargs)[source]
Override construct in Cell.
Note
Names of supported functions include: ‘prob’, ‘log_prob’, ‘cdf’, ‘log_cdf’, ‘survival_function’, ‘log_survival’, ‘var’, ‘sd’, ‘entropy’, ‘kl_loss’, ‘cross_entropy’, and ‘sample’.
- cross_entropy(dist, *args, **kwargs)[source]
Evaluate the cross_entropy between distribution a and b.
- Parameters
dist (str) – the type of the distribution.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
dist_spec_args of distribution b must be passed to the function through args or kwargs. Passing in dist_spec_args of distribution a is optional.
- entropy(*args, **kwargs)[source]
Evaluate the entropy.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- kl_loss(dist, *args, **kwargs)[source]
Evaluate the KL divergence, i.e. KL(a||b).
- Parameters
dist (str) – the type of the distribution.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
dist_spec_args of distribution b must be passed to the function through args or kwargs. Passing in dist_spec_args of distribution a is optional.
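For orientation, in the Normal case the closed form that kl_loss evaluates for \(a = N(\mu_a, \sigma_a^2)\) and \(b = N(\mu_b, \sigma_b^2)\) is the standard expression:
\[KL(a \| b) = \log\frac{\sigma_b}{\sigma_a} + \frac{\sigma_a^2 + (\mu_a - \mu_b)^2}{2 \sigma_b^2} - \frac{1}{2}\]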
- log_cdf(value, *args, **kwargs)[source]
Evaluate the log cdf at given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- log_prob(value, *args, **kwargs)[source]
Evaluate the log probability (pdf or pmf) at the given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- log_survival(value, *args, **kwargs)[source]
Evaluate the log survival function at given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- mean(*args, **kwargs)[source]
Evaluate the mean.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- mode(*args, **kwargs)[source]
Evaluate the mode.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- prob(value, *args, **kwargs)[source]
Evaluate the probability (pdf or pmf) at given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- sample(*args, **kwargs)[source]
Sampling function.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- sd(*args, **kwargs)[source]
Evaluate the standard deviation.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- survival_function(value, *args, **kwargs)[source]
Evaluate the survival function at given value.
- Parameters
value (Tensor) – the value to be evaluated.
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- var(*args, **kwargs)[source]
Evaluate the variance.
- Parameters
*args (list) – the list of positional arguments forwarded to subclasses.
**kwargs (dictionary) – the dictionary of keyword arguments forwarded to subclasses.
Note
A distribution can be optionally passed to the function by passing its dist_spec_args through args or kwargs.
- class mindspore.nn.probability.distribution.Exponential(rate=None, seed=None, dtype=mindspore.float32, name='Exponential')[source]
Exponential Distribution.
- Parameters
rate (float, list, numpy.ndarray, Tensor, Parameter) – The inverse scale.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the event samples. Default: mstype.float32.
name (str) – The name of the distribution. Default: ‘Exponential’.
Note
rate must be strictly greater than 0. dist_spec_args is rate. dtype must be a float type because Exponential distributions are continuous.
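For reference, with rate \(\lambda\) the Exponential distribution has the standard density and mean:
\[f(x; \lambda) = \lambda e^{-\lambda x}, \quad x \ge 0, \qquad E[X] = \frac{1}{\lambda}\]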
Examples
>>> # To initialize an Exponential distribution of the rate 0.5.
>>> import mindspore.nn.probability.distribution as msd
>>> e = msd.Exponential(0.5, dtype=mstype.float32)
>>>
>>> # The following creates two independent Exponential distributions.
>>> e = msd.Exponential([0.5, 0.5], dtype=mstype.float32)
>>>
>>> # An Exponential distribution can be initialized without arguments.
>>> # In this case, `rate` must be passed in through `args` during function calls.
>>> e = msd.Exponential(dtype=mstype.float32)
>>>
>>> # To use an Exponential distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.e1 = msd.Exponential(0.5, dtype=mstype.float32)
>>>         self.e2 = msd.Exponential(dtype=mstype.float32)
>>>
>>>     # All the following calls in construct are valid.
>>>     def construct(self, value, rate_b, rate_a):
>>>         # Private interfaces of probability functions corresponding to public interfaces, including
>>>         # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, are the same as follows.
>>>         # Args:
>>>         #     value (Tensor): the value to be evaluated.
>>>         #     rate (Tensor): the rate of the distribution. Default: self.rate.
>>>
>>>         # Examples of `prob`.
>>>         # Similar calls can be made to other probability functions
>>>         # by replacing `prob` by the name of the function.
>>>         ans = self.e1.prob(value)
>>>         # Evaluate with respect to distribution b.
>>>         ans = self.e1.prob(value, rate_b)
>>>         # `rate` must be passed in during function calls.
>>>         ans = self.e2.prob(value, rate_a)
>>>
>>>         # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments as follows.
>>>         # Args:
>>>         #     rate (Tensor): the rate of the distribution. Default: self.rate.
>>>
>>>         # Examples of `mean`. `sd`, `var`, and `entropy` are similar.
>>>         ans = self.e1.mean()       # return 2
>>>         ans = self.e1.mean(rate_b) # return 1 / rate_b
>>>         # `rate` must be passed in during function calls.
>>>         ans = self.e2.mean(rate_a)
>>>
>>>         # Interfaces of `kl_loss` and `cross_entropy` are the same.
>>>         # Args:
>>>         #     dist (str): The name of the distribution. Only 'Exponential' is supported.
>>>         #     rate_b (Tensor): the rate of distribution b.
>>>         #     rate_a (Tensor): the rate of distribution a. Default: self.rate.
>>>
>>>         # Examples of `kl_loss`. `cross_entropy` is similar.
>>>         ans = self.e1.kl_loss('Exponential', rate_b)
>>>         ans = self.e1.kl_loss('Exponential', rate_b, rate_a)
>>>         # An additional `rate` must be passed in.
>>>         ans = self.e2.kl_loss('Exponential', rate_b, rate_a)
>>>
>>>         # Examples of `sample`.
>>>         # Args:
>>>         #     shape (tuple): the shape of the sample. Default: ().
>>>         #     rate (Tensor): the rate of the distribution. Default: self.rate.
>>>         ans = self.e1.sample()
>>>         ans = self.e1.sample((2,3))
>>>         ans = self.e1.sample((2,3), rate_b)
>>>         ans = self.e2.sample((2,3), rate_a)
- property rate
Return rate of the distribution.
- class mindspore.nn.probability.distribution.Geometric(probs=None, seed=None, dtype=mindspore.int32, name='Geometric')[source]
Geometric Distribution. It represents the number of failures before the first success, namely that there are in total k+1 Bernoulli trials when the first success is achieved.
- Parameters
probs (float, list, numpy.ndarray, Tensor, Parameter) – The probability of success.
seed (int) – The seed used in sampling. Global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the event samples. Default: mstype.int32.
name (str) – The name of the distribution. Default: ‘Geometric’.
Note
probs must be a proper probability (0 < p < 1). dist_spec_args is probs.
Examples
>>> # To initialize a Geometric distribution of the probability 0.5.
>>> import mindspore.nn.probability.distribution as msd
>>> n = msd.Geometric(0.5, dtype=mstype.int32)
>>>
>>> # The following creates two independent Geometric distributions.
>>> n = msd.Geometric([0.5, 0.5], dtype=mstype.int32)
>>>
>>> # A Geometric distribution can be initialized without arguments.
>>> # In this case, `probs` must be passed in through arguments during function calls.
>>> n = msd.Geometric(dtype=mstype.int32)
>>>
>>> # To use a Geometric distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.g1 = msd.Geometric(0.5, dtype=mstype.int32)
>>>         self.g2 = msd.Geometric(dtype=mstype.int32)
>>>
>>>     # The following calls are valid in construct.
>>>     def construct(self, value, probs_b, probs_a):
>>>         # Private interfaces of probability functions corresponding to public interfaces, including
>>>         # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, have the same arguments as follows.
>>>         # Args:
>>>         #     value (Tensor): the value to be evaluated.
>>>         #     probs1 (Tensor): the probability of success of a Bernoulli trial. Default: self.probs.
>>>
>>>         # Examples of `prob`.
>>>         # Similar calls can be made to other probability functions
>>>         # by replacing `prob` by the name of the function.
>>>         ans = self.g1.prob(value)
>>>         # Evaluate with respect to distribution b.
>>>         ans = self.g1.prob(value, probs_b)
>>>         # `probs` must be passed in during function calls.
>>>         ans = self.g2.prob(value, probs_a)
>>>
>>>         # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>>         # Args:
>>>         #     probs1 (Tensor): the probability of success of a Bernoulli trial. Default: self.probs.
>>>
>>>         # Examples of `mean`. `sd`, `var`, and `entropy` are similar.
>>>         ans = self.g1.mean()        # return 1.0
>>>         ans = self.g1.mean(probs_b)
>>>         # `probs` must be passed in during function calls.
>>>         ans = self.g2.mean(probs_a)
>>>
>>>         # Interfaces of 'kl_loss' and 'cross_entropy' are the same.
>>>         # Args:
>>>         #     dist (str): the name of the distribution. Only 'Geometric' is supported.
>>>         #     probs1_b (Tensor): the probability of success of a Bernoulli trial of distribution b.
>>>         #     probs1_a (Tensor): the probability of success of a Bernoulli trial of distribution a. Default: self.probs.
>>>
>>>         # Examples of `kl_loss`. `cross_entropy` is similar.
>>>         ans = self.g1.kl_loss('Geometric', probs_b)
>>>         ans = self.g1.kl_loss('Geometric', probs_b, probs_a)
>>>         # An additional `probs` must be passed in.
>>>         ans = self.g2.kl_loss('Geometric', probs_b, probs_a)
>>>
>>>         # Examples of `sample`.
>>>         # Args:
>>>         #     shape (tuple): the shape of the sample. Default: ().
>>>         #     probs1 (Tensor): the probability of success of a Bernoulli trial. Default: self.probs.
>>>         ans = self.g1.sample()
>>>         ans = self.g1.sample((2,3))
>>>         ans = self.g1.sample((2,3), probs_b)
>>>         ans = self.g2.sample((2,3), probs_a)
- property probs
Return the probability of success of the Bernoulli trial.
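For reference, with success probability \(p\) the Geometric distribution described above (number of failures before the first success) has the standard pmf and mean:
\[P(X = k) = (1 - p)^{k} p, \quad k = 0, 1, 2, \ldots, \qquad E[X] = \frac{1 - p}{p}\]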
- class mindspore.nn.probability.distribution.Normal(mean=None, sd=None, seed=None, dtype=mindspore.float32, name='Normal')[source]
Normal distribution.
- Parameters
mean (int, float, list, numpy.ndarray, Tensor, Parameter) – The mean of the Normal distribution.
sd (int, float, list, numpy.ndarray, Tensor, Parameter) – The standard deviation of the Normal distribution.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the event samples. Default: mstype.float32.
name (str) – The name of the distribution. Default: ‘Normal’.
Note
sd must be greater than zero. dist_spec_args are mean and sd. dtype must be a float type because Normal distributions are continuous.
Examples
>>> # To initialize a Normal distribution of the mean 3.0 and the standard deviation 4.0.
>>> import mindspore.nn.probability.distribution as msd
>>> n = msd.Normal(3.0, 4.0, dtype=mstype.float32)
>>>
>>> # The following creates two independent Normal distributions.
>>> n = msd.Normal([3.0, 3.0], [4.0, 4.0], dtype=mstype.float32)
>>>
>>> # A Normal distribution can be initialized without arguments.
>>> # In this case, `mean` and `sd` must be passed in through arguments.
>>> n = msd.Normal(dtype=mstype.float32)
>>>
>>> # To use a Normal distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.n1 = msd.Normal(0.0, 1.0, dtype=mstype.float32)
>>>         self.n2 = msd.Normal(dtype=mstype.float32)
>>>
>>>     # The following calls are valid in construct.
>>>     def construct(self, value, mean_b, sd_b, mean_a, sd_a):
>>>         # Private interfaces of probability functions corresponding to public interfaces, including
>>>         # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, have the same arguments as follows.
>>>         # Args:
>>>         #     value (Tensor): the value to be evaluated.
>>>         #     mean (Tensor): the mean of distribution. Default: self._mean_value.
>>>         #     sd (Tensor): the standard deviation of distribution. Default: self._sd_value.
>>>
>>>         # Examples of `prob`.
>>>         # Similar calls can be made to other probability functions
>>>         # by replacing 'prob' by the name of the function.
>>>         ans = self.n1.prob(value)
>>>         # Evaluate with respect to distribution b.
>>>         ans = self.n1.prob(value, mean_b, sd_b)
>>>         # `mean` and `sd` must be passed in during function calls.
>>>         ans = self.n2.prob(value, mean_a, sd_a)
>>>
>>>         # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>>         # Args:
>>>         #     mean (Tensor): the mean of distribution. Default: self._mean_value.
>>>         #     sd (Tensor): the standard deviation of distribution. Default: self._sd_value.
>>>
>>>         # Example of `mean`. `sd`, `var`, and `entropy` are similar.
>>>         ans = self.n1.mean()             # return 0.0
>>>         ans = self.n1.mean(mean_b, sd_b) # return mean_b
>>>         # `mean` and `sd` must be passed in during function calls.
>>>         ans = self.n2.mean(mean_a, sd_a)
>>>
>>>         # Interfaces of 'kl_loss' and 'cross_entropy' are the same:
>>>         # Args:
>>>         #     dist (str): the type of the distributions. Only "Normal" is supported.
>>>         #     mean_b (Tensor): the mean of distribution b.
>>>         #     sd_b (Tensor): the standard deviation of distribution b.
>>>         #     mean_a (Tensor): the mean of distribution a. Default: self._mean_value.
>>>         #     sd_a (Tensor): the standard deviation of distribution a. Default: self._sd_value.
>>>
>>>         # Examples of `kl_loss`. `cross_entropy` is similar.
>>>         ans = self.n1.kl_loss('Normal', mean_b, sd_b)
>>>         ans = self.n1.kl_loss('Normal', mean_b, sd_b, mean_a, sd_a)
>>>         # Additional `mean` and `sd` must be passed in.
>>>         ans = self.n2.kl_loss('Normal', mean_b, sd_b, mean_a, sd_a)
>>>
>>>         # Examples of `sample`.
>>>         # Args:
>>>         #     shape (tuple): the shape of the sample. Default: ().
>>>         #     mean (Tensor): the mean of the distribution. Default: self._mean_value.
>>>         #     sd (Tensor): the standard deviation of the distribution. Default: self._sd_value.
>>>         ans = self.n1.sample()
>>>         ans = self.n1.sample((2,3))
>>>         ans = self.n1.sample((2,3), mean_b, sd_b)
>>>         ans = self.n2.sample((2,3), mean_a, sd_a)
- class mindspore.nn.probability.distribution.TransformedDistribution(bijector, distribution, dtype, seed=None, name='transformed_distribution')[source]
Transformed Distribution. This class contains a bijector and a distribution and transforms the original distribution to a new distribution through the operation defined by the bijector.
- Parameters
bijector (Bijector) – The transformation to perform.
distribution (Distribution) – The original distribution.
dtype (mindspore.dtype) – The type of the event samples.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
name (str) – The name of the transformed distribution. Default: ‘transformed_distribution’.
Note
The arguments used to initialize the original distribution cannot be None. For example, mynormal = msd.Normal(dtype=mstype.float32) cannot be used to initialize a TransformedDistribution since mean and sd are not specified.
Examples
>>> # To initialize a transformed distribution, e.g. a lognormal distribution,
>>> # using a Normal distribution as the base distribution,
>>> # and an Exp bijector as the bijector function.
>>> import mindspore.nn.probability.distribution as msd
>>> import mindspore.nn.probability.bijector as msb
>>> ln = msd.TransformedDistribution(msb.Exp(),
>>>                                  msd.Normal(0.0, 1.0, dtype=mstype.float32),
>>>                                  dtype=mstype.float32)
>>>
>>> # To use a transformed distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.ln = msd.TransformedDistribution(msb.Exp(),
>>>                                               msd.Normal(0.0, 1.0, dtype=mstype.float32),
>>>                                               dtype=mstype.float32)
>>>
>>>     def construct(self, value):
>>>         # Similar calls can be made to other functions
>>>         # by replacing 'sample' by the name of the function.
>>>         ans = self.ln.sample(shape=(2, 3))
- class mindspore.nn.probability.distribution.Uniform(low=None, high=None, seed=None, dtype=mindspore.float32, name='Uniform')[source]
Uniform Distribution.
- Parameters
low (int, float, list, numpy.ndarray, Tensor, Parameter) – The lower bound of the distribution.
high (int, float, list, numpy.ndarray, Tensor, Parameter) – The upper bound of the distribution.
seed (int) – The seed used in sampling. The global seed is used if it is None. Default: None.
dtype (mindspore.dtype) – The type of the event samples. Default: mstype.float32.
name (str) – The name of the distribution. Default: ‘Uniform’.
Note
low must be strictly less than high. dist_spec_args are high and low. dtype must be a float type because Uniform distributions are continuous.
Examples
>>> # To initialize a Uniform distribution of the lower bound 0.0 and the higher bound 1.0.
>>> import mindspore.nn.probability.distribution as msd
>>> u = msd.Uniform(0.0, 1.0, dtype=mstype.float32)
>>>
>>> # The following creates two independent Uniform distributions.
>>> u = msd.Uniform([0.0, 0.0], [1.0, 2.0], dtype=mstype.float32)
>>>
>>> # A Uniform distribution can be initialized without arguments.
>>> # In this case, `high` and `low` must be passed in through arguments during function calls.
>>> u = msd.Uniform(dtype=mstype.float32)
>>>
>>> # To use a Uniform distribution in a network.
>>> class net(Cell):
>>>     def __init__(self):
>>>         super(net, self).__init__()
>>>         self.u1 = msd.Uniform(0.0, 1.0, dtype=mstype.float32)
>>>         self.u2 = msd.Uniform(dtype=mstype.float32)
>>>
>>>     # All the following calls in construct are valid.
>>>     def construct(self, value, low_b, high_b, low_a, high_a):
>>>         # Private interfaces of probability functions corresponding to public interfaces, including
>>>         # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, have the same arguments.
>>>         # Args:
>>>         #     value (Tensor): the value to be evaluated.
>>>         #     low (Tensor): the lower bound of distribution. Default: self.low.
>>>         #     high (Tensor): the higher bound of distribution. Default: self.high.
>>>
>>>         # Examples of `prob`.
>>>         # Similar calls can be made to other probability functions
>>>         # by replacing 'prob' by the name of the function.
>>>         ans = self.u1.prob(value)
>>>         # Evaluate with respect to distribution b.
>>>         ans = self.u1.prob(value, low_b, high_b)
>>>         # `high` and `low` must be passed in during function calls.
>>>         ans = self.u2.prob(value, low_a, high_a)
>>>
>>>         # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>>         # Args:
>>>         #     low (Tensor): the lower bound of distribution. Default: self.low.
>>>         #     high (Tensor): the higher bound of distribution. Default: self.high.
>>>
>>>         # Examples of `mean`. `sd`, `var`, and `entropy` are similar.
>>>         ans = self.u1.mean()             # return 0.5
>>>         ans = self.u1.mean(low_b, high_b) # return (low_b + high_b) / 2
>>>         # `high` and `low` must be passed in during function calls.
>>>         ans = self.u2.mean(low_a, high_a)
>>>
>>>         # Interfaces of 'kl_loss' and 'cross_entropy' are the same.
>>>         # Args:
>>>         #     dist (str): the type of the distributions. Should be "Uniform" in this case.
>>>         #     low_b (Tensor): the lower bound of distribution b.
>>>         #     high_b (Tensor): the upper bound of distribution b.
>>>         #     low_a (Tensor): the lower bound of distribution a. Default: self.low.
>>>         #     high_a (Tensor): the upper bound of distribution a. Default: self.high.
>>>
>>>         # Examples of `kl_loss`. `cross_entropy` is similar.
>>>         ans = self.u1.kl_loss('Uniform', low_b, high_b)
>>>         ans = self.u1.kl_loss('Uniform', low_b, high_b, low_a, high_a)
>>>         # Additional `high` and `low` must be passed in.
>>>         ans = self.u2.kl_loss('Uniform', low_b, high_b, low_a, high_a)
>>>
>>>         # Examples of `sample`.
>>>         # Args:
>>>         #     shape (tuple): the shape of the sample. Default: ().
>>>         #     low (Tensor): the lower bound of the distribution. Default: self.low.
>>>         #     high (Tensor): the upper bound of the distribution. Default: self.high.
>>>         ans = self.u1.sample()
>>>         ans = self.u1.sample((2,3))
>>>         ans = self.u1.sample((2,3), low_b, high_b)
>>>         ans = self.u2.sample((2,3), low_a, high_a)
- property high
Return the upper bound of the distribution.
- property low
Return the lower bound of the distribution.
mindspore.nn.probability.dpn
Deep probability networks such as BNN and VAE networks.
- class mindspore.nn.probability.dpn.ConditionalVAE(encoder, decoder, hidden_size, latent_size, num_classes)[source]
Conditional Variational Auto-Encoder (CVAE).
The difference from VAE is that CVAE uses label information. For more details, refer to Learning Structured Output Representation using Deep Conditional Generative Models.
Note
When the encoder and decoder are defined, the shape of the encoder’s output tensor and decoder’s input tensor must be \((N, hidden\_size)\). The latent_size must be less than or equal to the hidden_size.
- Parameters
encoder (Cell) – The Deep Neural Network (DNN) model defined as the encoder.
decoder (Cell) – The DNN model defined as the decoder.
hidden_size (int) – The size of the encoder’s output tensor.
latent_size (int) – The size of the latent space.
num_classes (int) – The number of classes.
- Inputs:
input_x (Tensor) - The shape of input tensor is \((N, C, H, W)\), which is the same as the input of encoder.
input_y (Tensor) - The tensor of the target data, the shape is \((N,)\).
- Outputs:
output (tuple) - (recon_x(Tensor), x(Tensor), mu(Tensor), std(Tensor)).
- construct(x, y)[source]
The inputs are x and y, so the WithLossCell method needs to be rewritten when using the CVAE interface.
- class mindspore.nn.probability.dpn.VAE(encoder, decoder, hidden_size, latent_size)[source]
Variational Auto-Encoder (VAE).
The VAE defines a generative model in which Z is sampled from the prior and then used to reconstruct X by a decoder. For more details, refer to Auto-Encoding Variational Bayes.
Note
When the encoder and decoder are defined, the shape of the encoder’s output tensor and decoder’s input tensor must be \((N, hidden\_size)\). The latent_size must be less than or equal to the hidden_size.
- Parameters
encoder (Cell) – The Deep Neural Network (DNN) model defined as the encoder.
decoder (Cell) – The DNN model defined as the decoder.
hidden_size (int) – The size of the encoder’s output tensor.
latent_size (int) – The size of the latent space.
- Inputs:
input (Tensor) - The shape of input tensor is \((N, C, H, W)\), which is the same as the input of encoder.
- Outputs:
output (Tuple) - (recon_x(Tensor), x(Tensor), mu(Tensor), std(Tensor)).
mindspore.nn.probability.infer
Inference algorithms in Probabilistic Programming.
- class mindspore.nn.probability.infer.ELBO(latent_prior='Normal', output_prior='Normal')[source]
The Evidence Lower Bound (ELBO).
Variational inference minimizes the Kullback-Leibler (KL) divergence from the variational distribution to the posterior distribution. It maximizes the ELBO, a lower bound on the logarithm of the marginal probability of the observations log p(x). The ELBO is equal to the negative KL divergence up to an additive constant. For more details, refer to Variational Inference: A Review for Statisticians.
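Written out, the quantity being maximized is the standard ELBO: for an observation x, a latent variable z, a variational distribution q(z|x), and a prior p(z),
\[\mathrm{ELBO}(x) = E_{q(z|x)}\big[\log p(x|z)\big] - KL\big(q(z|x)\,\|\,p(z)\big) \le \log p(x)\]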
- Parameters
latent_prior (str) – The prior distribution of the latent space. Default: ‘Normal’.
output_prior (str) – The distribution of the output data. Default: ‘Normal’.
- Inputs:
input_data (Tuple) - (recon_x(Tensor), x(Tensor), mu(Tensor), std(Tensor)).
target_data (Tensor) - the target tensor of shape \((N,)\).
- Outputs:
Tensor, a float tensor representing the loss.
- class mindspore.nn.probability.infer.SVI(net_with_loss, optimizer)[source]
Stochastic Variational Inference (SVI).
Variational inference casts the inference problem as an optimization. Some distributions over the hidden variables are indexed by a set of free parameters, which are optimized to make distributions closest to the posterior of interest. For more details, refer to Variational Inference: A Review for Statisticians.
- Parameters
net_with_loss (Cell) – Cell with loss function.
optimizer (Cell) – Optimizer for updating the weights.
- run(train_dataset, epochs=10)[source]
Optimize the parameters by training the probability network, and return the trained network.
- Parameters
epochs (int) – Total number of iterations on the data. Default: 10.
train_dataset (Dataset) – A training dataset iterator.
- Outputs:
Cell, the trained probability network.
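Putting SVI together with the VAE and ELBO classes above, a typical training loop might look like the following sketch (encoder, decoder, and ds_train are hypothetical user-defined pieces; the argument values are illustrative):
>>> from mindspore.nn.probability.dpn import VAE
>>> from mindspore.nn.probability.infer import ELBO, SVI
>>> import mindspore.nn as nn
>>>
>>> # encoder and decoder are user-defined Cells whose shapes follow the VAE note above.
>>> vae = VAE(encoder, decoder, hidden_size=400, latent_size=20)
>>> net_loss = ELBO(latent_prior='Normal', output_prior='Normal')
>>> net_with_loss = nn.WithLossCell(vae, net_loss)
>>> optimizer = nn.Adam(params=vae.trainable_params(), learning_rate=0.001)
>>> vi = SVI(net_with_loss=net_with_loss, optimizer=optimizer)
>>> vae = vi.run(train_dataset=ds_train, epochs=10)  # ds_train is a hypothetical dataset iterator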
mindspore.nn.probability.toolbox
Uncertainty toolbox.
- class mindspore.nn.probability.toolbox.UncertaintyEvaluation(model, train_dataset, task_type, num_classes=None, epochs=1, epi_uncer_model_path=None, ale_uncer_model_path=None, save_model=False)[source]
Toolbox for Uncertainty Evaluation.
- Parameters
model (Cell) – The model for uncertainty evaluation.
train_dataset (Dataset) – A dataset iterator to train model.
task_type (str) – The task type of the model. The options are regression (a regression model) and classification (a classification model).
num_classes (int) – The number of labels of classification. If the task type is classification, it must be set; otherwise, it is not needed. Default: None.
epochs (int) – Total number of iterations on the data. Default: 1.
epi_uncer_model_path (str) – The save or read path of the epistemic uncertainty model. Default: None.
ale_uncer_model_path (str) – The save or read path of the aleatoric uncertainty model. Default: None.
save_model (bool) – Whether to save the uncertainty model or not. If true, epi_uncer_model_path and ale_uncer_model_path must not be None. If false, the model to be evaluated will be loaded from the path of the uncertainty model; if the path is not given, it will not save or load the uncertainty model. Default: False.
Examples
>>> network = LeNet()
>>> param_dict = load_checkpoint('checkpoint_lenet.ckpt')
>>> load_param_into_net(network, param_dict)
>>> ds_train = create_dataset('workspace/mnist/train')
>>> evaluation = UncertaintyEvaluation(model=network,
>>>                                    train_dataset=ds_train,
>>>                                    task_type='classification',
>>>                                    num_classes=10,
>>>                                    epochs=1,
>>>                                    epi_uncer_model_path=None,
>>>                                    ale_uncer_model_path=None,
>>>                                    save_model=False)
>>> epistemic_uncertainty = evaluation.eval_epistemic_uncertainty(eval_data)
>>> aleatoric_uncertainty = evaluation.eval_aleatoric_uncertainty(eval_data)
>>> epistemic_uncertainty.shape
(32, 10)
>>> aleatoric_uncertainty.shape
(32,)
- eval_aleatoric_uncertainty(eval_data)[source]
Evaluate the aleatoric uncertainty of inference results, which is also called data uncertainty.
- Parameters
eval_data (Tensor) – The data samples to be evaluated, the shape must be (N,C,H,W).
- Returns
numpy.dtype, the aleatoric uncertainty of inference results of data samples.
- eval_epistemic_uncertainty(eval_data)[source]
Evaluate the epistemic uncertainty of inference results, which is also called model uncertainty.
- Parameters
eval_data (Tensor) – The data samples to be evaluated, the shape must be (N,C,H,W).
- Returns
numpy.dtype, the epistemic uncertainty of inference results of data samples.
mindspore.nn.probability.transforms
The high-level components used to transform a model between a Deep Neural Network (DNN) and a Bayesian Neural Network (BNN).
- class mindspore.nn.probability.transforms.TransformToBNN(trainable_dnn, dnn_factor=1, bnn_factor=1)[source]
Transform Deep Neural Network (DNN) model to Bayesian Neural Network (BNN) model.
- Parameters
trainable_dnn (Cell) – A trainable DNN model (backbone) wrapped by TrainOneStepCell.
dnn_factor (int, float) – The coefficient of backbone’s loss, which is computed by the loss function. Default: 1.
bnn_factor (int, float) – The coefficient of KL loss, which is KL divergence of Bayesian layer. Default: 1.
Examples
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64 * 224 * 224, 12)  # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> criterion = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = WithLossCell(net, criterion)
>>> train_network = TrainOneStepCell(net_with_loss, optim)
>>> bnn_transformer = TransformToBNN(train_network, 60000, 0.1)
- transform_to_bnn_layer(dnn_layer_type, bnn_layer_type, get_args=None, add_args=None)[source]
Transform a specific type of layer in the DNN model to the corresponding BNN layer.
- Parameters
dnn_layer_type (Cell) – The type of DNN layer to be transformed to BNN layer. The optional values are nn.Dense and nn.Conv2d.
bnn_layer_type (Cell) – The type of BNN layer to be transformed to. The optional values are DenseReparam and ConvReparam.
get_args – The arguments gotten from the DNN layer. Default: None.
add_args (dict) – The new arguments added to BNN layer. Note that the arguments in add_args must not duplicate arguments in get_args. Default: None.
- Returns
Cell, a trainable model wrapped by TrainOneStepCell, in which the specified type of layer is transformed to the corresponding Bayesian layer.
Examples
>>> net = Net()
>>> criterion = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = WithLossCell(net, criterion)
>>> train_network = TrainOneStepCell(net_with_loss, optim)
>>> bnn_transformer = TransformToBNN(train_network, 60000, 0.1)
>>> train_bnn_network = bnn_transformer.transform_to_bnn_layer(Dense, DenseReparam)
- transform_to_bnn_model(get_dense_args=<function TransformToBNN.<lambda>>, get_conv_args=<function TransformToBNN.<lambda>>, add_dense_args=None, add_conv_args=None)[source]
Transform the whole DNN model to BNN model, and wrap BNN model by TrainOneStepCell.
- Parameters
get_dense_args – The arguments gotten from the DNN full connection layer. Default: lambda dp: {“in_channels”: dp.in_channels, “out_channels”: dp.out_channels, “has_bias”: dp.has_bias}.
get_conv_args – The arguments gotten from the DNN convolutional layer. Default: lambda dp: {“in_channels”: dp.in_channels, “out_channels”: dp.out_channels, “pad_mode”: dp.pad_mode, “kernel_size”: dp.kernel_size, “stride”: dp.stride, “has_bias”: dp.has_bias}.
add_dense_args (dict) – The new arguments added to BNN full connection layer. Note that the arguments in add_dense_args must not duplicate arguments in get_dense_args. Default: None.
add_conv_args (dict) – The new arguments added to BNN convolutional layer. Note that the arguments in add_conv_args must not duplicate arguments in get_conv_args. Default: None.
- Returns
Cell, a trainable BNN model wrapped by TrainOneStepCell.
Examples
>>> net = Net()
>>> criterion = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = WithLossCell(net, criterion)
>>> train_network = TrainOneStepCell(net_with_loss, optim)
>>> bnn_transformer = TransformToBNN(train_network, 60000, 0.1)
>>> train_bnn_network = bnn_transformer.transform_to_bnn_model()