Method of Feasible Directions (MFD)
The Method of Feasible Directions is used to solve constrained optimization problems. Its fundamental principle is to move from one feasible design to an improved feasible design: the objective function must be reduced, and the constraints at the new design point must not be violated.
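To make this principle concrete, the classical direction-finding subproblem behind feasible direction methods (the Zoutendijk/Vanderplaats form) searches for a direction d that is both usable (it reduces the objective, so ∇f·d < 0) and feasible (it does not immediately violate the active constraints, so ∇g_j·d < 0). The sketch below is not this tool's implementation; it is a minimal, self-contained example of that subproblem posed as a linear program, with the gradients, active-constraint set, and push-off factor chosen arbitrarily for illustration.

```python
# Minimal sketch of the usable-feasible direction-finding subproblem
# (Zoutendijk/Vanderplaats form). The gradients and push-off factor
# below are illustrative assumptions, not values produced by the tool.
import numpy as np
from scipy.optimize import linprog

def find_feasible_direction(grad_f, active_grad_g, theta=1.0):
    """Maximize beta subject to  grad_f . d + beta <= 0,
                                 grad_g_j . d + theta * beta <= 0,
                                 -1 <= d_i <= 1.
    beta > 0 means a usable-feasible direction d exists."""
    n = grad_f.size
    # Decision vector: [d_1 ... d_n, beta]; linprog minimizes, so use -beta.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    rows = [np.append(grad_f, 1.0)]                       # usability condition
    rows += [np.append(g, theta) for g in active_grad_g]  # feasibility conditions
    A_ub = np.vstack(rows)
    b_ub = np.zeros(len(rows))
    bounds = [(-1.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n]  # direction d and the margin beta

# Example with made-up gradients at the current (feasible) design
d, beta = find_feasible_direction(
    grad_f=np.array([2.0, -1.0]),          # objective gradient
    active_grad_g=[np.array([1.0, 1.0])],  # one active constraint gradient
)
print("direction:", d, "beta:", beta)
```

If the optimal beta is (numerically) zero, no usable-feasible direction exists and the current design satisfies the method's optimality conditions; otherwise a line search along d determines how far to move while staying feasible.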
Usability Characteristics
- A gradient-based method, which will most likely find a local optimum.
- Can be efficient for problems with a large number of constraints, but in general it is less accurate than Sequential Quadratic Programming and less efficient than the Adaptive Response Surface Method.
- One iteration of the Method of Feasible Directions requires a number of simulations. The number of simulations is a function of the number of input variables, since a finite difference method is used for gradient evaluation; as a result, it can be an expensive method for applications with a large number of input variables (see the gradient sketch after this list).
- The Method of Feasible Directions terminates if one of the conditions below is met:
  - One of the two convergence criteria is satisfied:
    - Absolute convergence (Absolute Convergence)
    - Relative convergence (Relative Convergence (%))
  - The maximum number of allowable iterations (Maximum Iterations) is reached.
  - An analysis fails and On Failed Evaluation is set to its default value, Terminate optimization.
- The number of evaluations in each iteration is set automatically and varies because of the finite difference calculations used in the sensitivity calculation; it depends on the number of variables and the Sensitivity setting. The evaluations required for the finite differences are executed in parallel, while the evaluations required for the line search are executed sequentially.
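Since the cost discussion above hinges on the finite difference sensitivities, the following sketch illustrates why the evaluation count scales with the number of input variables: with forward differences, each of the n variables needs one extra evaluation on top of the nominal design, and these perturbed runs are independent, so they can be executed in parallel. The `simulate` callable and the fixed step `h` are placeholders for illustration, not the tool's internals.

```python
# Forward finite differences: 1 nominal evaluation + 1 per input variable.
# `simulate` stands in for the (expensive) solver call.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def forward_difference_gradient(simulate, x, h=1e-4):
    """Approximate df/dx_i ~ (f(x + h*e_i) - f(x)) / h for each variable."""
    x = np.asarray(x, dtype=float)
    f0 = simulate(x)                                  # nominal design
    perturbed = [x + h * np.eye(x.size)[i] for i in range(x.size)]
    with ThreadPoolExecutor() as pool:                # the n perturbed runs are
        f_pert = list(pool.map(simulate, perturbed))  # independent -> parallel
    return np.array([(fp - f0) / h for fp in f_pert])

# Example with a cheap analytic stand-in for the simulation
grad = forward_difference_gradient(lambda x: (x[0] - 1.0)**2 + 3.0 * x[1]**2,
                                   x=[0.5, 2.0])
print(grad)  # approximately [-1.0, 12.0]; total cost was 1 + n evaluations
```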
Settings
Parameter | Default | Range | Description |
---|---|---|---|
Maximum Iterations | 25 | >0 | Maximum number of iterations allowed. |
Absolute Convergence | 0.001 | >0.0 | Determines an absolute convergence tolerance, which is constant and equal to Absolute Convergence times the initial objective function value. The design has converged when there are three consecutive designs for which the absolute change in the objective function is less than this tolerance. There also must not be any constraint whose allowable violation is exceeded in these three consecutive designs (see the sketch after this table). Note: A larger value allows for faster convergence, but worse results could be achieved. |
Relative Convergence (%) | 1.0 | >0.0 | The design has converged if the relative (percent) change in the objective function is less than this value for three consecutive designs. There also must not be any constraint whose allowable violation is exceeded in these three consecutive designs. Note: A larger value allows for faster convergence, but worse results could be achieved. |
On Failed Evaluation | Terminate optimization | Terminate optimization or Ignore failed evaluations | Defines how the optimizer responds when an evaluation fails. With the default, Terminate optimization, the optimization stops on a failed analysis. |
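The two convergence tests above can be read as a check over the three most recent designs. The sketch below is an interpretation of that wording, not the tool's source code; the history lists, the feasibility flags, and the relative-change denominator are assumptions made for illustration.

```python
# Hedged interpretation of the Absolute Convergence and Relative Convergence (%)
# tests: three consecutive designs with small objective changes and no
# allowable constraint violation exceeded.
def has_converged(objectives, feasible,
                  absolute_convergence=0.001, relative_convergence_pct=1.0):
    """objectives: objective value per design, in order.
    feasible: same length, True if no allowable violation is exceeded."""
    if len(objectives) < 3:
        return False
    abs_tol = absolute_convergence * abs(objectives[0])  # scaled by initial objective
    recent = objectives[-3:]
    if not all(feasible[-3:]):
        return False
    abs_changes = [abs(b - a) for a, b in zip(recent, recent[1:])]
    rel_changes = [100.0 * abs(b - a) / max(abs(a), 1e-30)
                   for a, b in zip(recent, recent[1:])]
    return (all(c < abs_tol for c in abs_changes)
            or all(c < relative_convergence_pct for c in rel_changes))

# Example: the objective has stalled over the last three feasible designs
print(has_converged([10.0, 6.0, 5.01, 5.005, 5.004], [True] * 5))  # True
```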
Parameter | Default | Range | Description |
---|---|---|---|
Max Failed Evaluations | 20,000 | >=0 | When On Failed Evaluation is set to Ignore failed evaluations (1), the optimizer will tolerate failed evaluations up to this threshold. This option is intended to allow the optimizer to stop after an excessive number of failures. |
Use Perturbation size | No | No or Yes | Enables the use of Perturbation Size; otherwise, an internal automatic perturbation size is set. |
Perturbation Size | 0.0001 | >0.0 | Defines the size of the finite difference perturbation. For a variable x with upper and lower bounds (xu and xl, respectively), the perturbation is scaled based on x, xu, and xl to preserve reasonable perturbation sizes across a range of variable magnitudes (an illustrative sketch follows this table). |
Use Inclusion Matrix | No | No or Yes | |
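The Perturbation Size row above refers to a scaling rule based on the variable and its bounds but does not reproduce the exact formula, and neither does this sketch. The function below is a hypothetical example of such bound-aware scaling (perturb relative to the larger of the variable magnitude and the bound range), shown only to illustrate the idea of keeping perturbations reasonable across variables of very different magnitudes.

```python
# Hypothetical bound-aware perturbation sizing. This is NOT the tool's formula,
# which is not reproduced in the table above; it only illustrates the idea.
def perturbation(x, xl, xu, perturbation_size=0.0001):
    """Scale the finite difference step by the larger of |x| and the bound range."""
    delta = perturbation_size * max(abs(x), xu - xl)
    # Flip direction if the perturbed point would leave the bounds.
    return delta if x + delta <= xu else -delta

print(perturbation(x=0.002, xl=0.0, xu=0.01))     # tiny variable  -> tiny step
print(perturbation(x=5.0e6, xl=1.0e6, xu=1.0e7))  # large variable -> larger step
```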