Document feedback

Question document fragment

When a question document fragment contains a formula, the formula is displayed as a space.

Submission type
issue

It's a little complicated...

I'd like to ask someone.

PR

Just a small problem.

I can fix it online!


Problem type
Specifications and Common Mistakes

- Specifications and Common Mistakes:

- Misspellings or punctuation mistakes, incorrect formulas, or abnormal display.

- Incorrect links, empty cells, or wrong formats.

- Chinese characters in an English context.

- Minor inconsistencies between the UI and descriptions.

- Low writing fluency that does not affect understanding.

- Incorrect version numbers, including software package names and version numbers on the UI.

Usability

- Usability:

- Incorrect or missing key steps.

- Missing main function descriptions, keyword explanations, necessary prerequisites, or precautions.

- Ambiguous descriptions, unclear references, or contradictory context.

- Unclear logic, such as missing classifications, items, and steps.

Correctness

- Correctness:

- Technical principles, function descriptions, supported platforms, parameter types, or exceptions inconsistent with the software implementation.

- Incorrect schematic or architecture diagrams.

- Incorrect commands or command parameters.

- Incorrect code.

- Commands inconsistent with the functions.

- Wrong screenshots.

- Sample code that fails to run, or whose results are inconsistent with expectations.

Risk Warnings

- Risk Warnings:

- Lack of risk warnings for operations that may damage the system or important data.

Content Compliance

- Content Compliance:

- Content that may violate applicable laws and regulations, or geo-culturally sensitive words and expressions.

- Copyright infringement.


Problem description

Describe the bug so that we can quickly locate the problem.

mindspore.scipy.optimize.minimize

mindspore.scipy.optimize.minimize(func, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)

Minimization of a scalar function of one or more variables.

The API for this function matches SciPy, with some minor deviations:

  • Gradients of func are calculated automatically using MindSpore’s autodiff support when the value of jac is None (a minimal sketch follows this list).

  • The method argument is required. An exception will be thrown if you don’t specify a solver.

  • Various optional arguments in the SciPy interface (“hess”, “hessp”, “bounds”, “constraints”, “tol”, “callback”) have not yet been implemented.

  • Optimization results may differ from SciPy due to differences in the line search implementation.
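
As a minimal, hedged sketch of the first point above, the gradient derived when jac is None is what MindSpore’s automatic differentiation produces for func (mindspore.grad is assumed to be available in your MindSpore version; the objective is the one used in the Examples section below):

>>> import numpy as onp
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> def func(p):
...     x, y = p
...     return (x ** 2 + y - 11.) ** 2 + (x + y ** 2 - 7.) ** 2
>>> x0 = Tensor(onp.zeros(2).astype(onp.float32))
>>> grad_fn = ms.grad(func)  # autodiff gradient; this is what minimize falls back to when jac is None
>>> g0 = grad_fn(x0)         # analytically, the gradient at the origin is [-14., -22.]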

Note

  • minimize does not yet support differentiation or arguments in the form of multi-dimensional Tensors, but support for both is planned.

  • minimize is not yet supported on the Windows platform.

  • The LAGRANGE method is only supported on “GPU”.

Parameters
  • func (Callable) – the objective function to be minimized, func(x, *args) -> float, where x is a 1-D array with shape (n,) and args is a tuple of the fixed parameters needed to completely specify the function. func must support differentiation if jac is None.

  • x0 (Tensor) – initial guess. Array of real elements of size (n,), where n is the number of independent variables.

  • args (Tuple) – extra arguments passed to the objective function. Default: ().

  • method (str) – solver type. Should be one of “BFGS”, “LBFGS”, or “LAGRANGE”.

  • jac (Callable, optional) – method for computing the gradient vector. Only used with “BFGS” and “LBFGS”. If it is None, the gradient will be computed automatically from func using MindSpore’s autodiff. If it is a callable, it should be a function that returns the gradient vector: jac(x, *args) -> array_like, shape (n,), where x is an array with shape (n,) and args is a tuple with the fixed parameters. (A sketch with an explicit jac is given at the end of the Examples section.)

  • hess (Callable, optional) – method for calculating the Hessian matrix. Not implemented yet.

  • hessp (Callable, optional) – Hessian of objective function times an arbitrary vector p. Not implemented yet.

  • bounds (Sequence, optional) – Sequence of (min, max) pairs for each element in x. Not implemented yet.

  • constraints (Callable, optional) – the inequality constraints; each function in constraints defines an inequality constraint of the form function < 0.

  • tol (float, optional) – tolerance for termination. For detailed control, use solver-specific options. Default: None.

  • callback (Callable, optional) – A callable called after each iteration. Not implemented yet.

  • options (Mapping[str, Any], optional) –

    a dictionary of solver options. All methods accept the following generic options (a short usage sketch follows this list). Default: None.

    • history_size (int): size of the buffer used to update the inverse Hessian approximation; only used with method=”LBFGS”. Default: 20.

    • maxiter (int): maximum number of iterations to perform. Depending on the method, each iteration may use several function evaluations.

    The following options are exclusive to the LAGRANGE method:

    • save_tol (list): list of saving tolerances, with the same length as ‘constraints’.

    • obj_weight (float): weight for the objective function, usually between 1.0 and 100000.0.

    • lower (Tensor): lower bound constraint for the variables; must have the same shape as x0.

    • upper (Tensor): upper bound constraint for the variables; must have the same shape as x0.

    • learning_rate (float): learning rate for each Adam step.

    • coincide_func (Callable): sub-function representing the parts shared by the objective function and the constraints, used to avoid redundant computation.

    • rounds (int): number of times to update the Lagrange multipliers.

    • steps (int): number of Adam steps to apply per round.

    • log_sw (bool): whether to print the loss at each step.
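
    To make the generic options above concrete, here is a minimal, hedged sketch of an LBFGS call that passes history_size and maxiter (the objective is the same one used in the Examples section below; gtol is assumed to be accepted as a solver-specific tolerance, as in those examples):

    >>> import numpy as onp
    >>> from mindspore import Tensor
    >>> from mindspore.scipy.optimize import minimize
    >>> def func(p):
    ...     x, y = p
    ...     return (x ** 2 + y - 11.) ** 2 + (x + y ** 2 - 7.) ** 2
    >>> x0 = Tensor(onp.zeros(2).astype(onp.float32))
    >>> # history_size bounds the L-BFGS memory; maxiter caps the number of iterations
    >>> res = minimize(func, x0, method='LBFGS',
    ...                options=dict(history_size=10, maxiter=100, gtol=1e-6))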

Returns

OptimizeResults, object holding optimization results.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as onp
>>> from mindspore.scipy.optimize import minimize
>>> from mindspore import Tensor
>>> x0 = Tensor(onp.zeros(2).astype(onp.float32))
>>> def func(p):
...     x, y = p
...     return (x ** 2 + y - 11.) ** 2 + (x + y ** 2 - 7.) ** 2
>>> res = minimize(func, x0, method='BFGS', options=dict(maxiter=None, gtol=1e-6))
>>> print(res.x)
[3. 2.]
>>> l_res = minimize(func, x0, method='LBFGS', options=dict(maxiter=None, gtol=1e-6))
>>> print(l_res.x)
[3. 2.]
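
As a further hedged sketch, an explicit gradient can be supplied through jac (only for “BFGS” and “LBFGS”). Assuming the callable may return a 1-D Tensor of shape (n,), the analytic gradient of func above could be passed like this; the run should converge to the same minimizer as the examples above:

>>> from mindspore import ops
>>> def jac(p):
...     # analytic gradient of func; slices keep shape (1,) so the pieces can be concatenated
...     x, y = p[0:1], p[1:2]
...     dx = 4. * x * (x ** 2 + y - 11.) + 2. * (x + y ** 2 - 7.)
...     dy = 2. * (x ** 2 + y - 11.) + 4. * y * (x + y ** 2 - 7.)
...     return ops.concat((dx, dy))
>>> res_jac = minimize(func, x0, method='BFGS', jac=jac, options=dict(maxiter=None, gtol=1e-6))
>>> # res_jac.x should match the minimizer found above, approximately [3. 2.]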