NLPInterfacePack: C++ Interfaces and Implementation for Non-Linear Programs
Version of the Day
Concrete class that tests the derivatives using finite differences.
#include <NLPInterfacePack_NLPFirstDerivTester.hpp>
Public Types
enum ETestingMethod
Public Member Functions
STANDARD_COMPOSITION_MEMBERS (CalcFiniteDiffProd, calc_fd_prod)
STANDARD_MEMBER_COMPOSITION_MEMBERS (ETestingMethod, fd_testing_method)
STANDARD_MEMBER_COMPOSITION_MEMBERS (size_type, num_fd_directions)
STANDARD_MEMBER_COMPOSITION_MEMBERS (value_type, warning_tol)
STANDARD_MEMBER_COMPOSITION_MEMBERS (value_type, error_tol)
NLPFirstDerivTester (const calc_fd_prod_ptr_t &calc_fd_prod=Teuchos::rcp(new CalcFiniteDiffProd()), ETestingMethod fd_testing_method=FD_DIRECTIONAL, size_type num_fd_directions=1, value_type warning_tol=1e-8, value_type error_tol=1e-3)
Constructor.
bool finite_diff_check (NLP *nlp, const Vector &xo, const Vector *xl, const Vector *xu, const MatrixOp *Gc, const Vector *Gf, bool print_all_warnings, std::ostream *out) const
This function takes an NLP object together with its computed derivatives and function values and validates the functions and the derivatives by evaluating them about the given point x. If all the checks described in the introduction pass, this function returns true; otherwise it returns false.
Concrete class that tests the derivatives using finite differences.
There are two options for testing the derivatives by finite differences.
The first option (fd_testing_method==FD_COMPUTE_ALL) is to compute all of them as dense vectors and matrices. This option can be very expensive in runtime and storage costs. The amount of storage space needed is O(n*m), and f(x) and c(x) will be computed O(n) times.
The other option (fd_testing_method==FD_DIRECTIONAL) computes products of the form g'*v and compares them to the finite-difference computed values g_fd'*v. This method costs only O(n) storage and two function evaluations per direction (assuming central differences are used). The directions v are computed randomly in [-1,+1] so that they are well scaled and should give good results. The option num_fd_directions() determines how many random directions are used. A value of num_fd_directions() <= 0 means that a single finite-difference direction of 1.0 will be used for the test.
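As a rough sketch (not taken verbatim from the library headers), the setter/getter pairs generated by the STANDARD_MEMBER_COMPOSITION_MEMBERS macros listed above can be used to select the directional test and the number of random directions; the member names below follow the macro arguments, but the exact generated signatures are an assumption:

  #include "NLPInterfacePack_NLPFirstDerivTester.hpp"

  using NLPInterfacePack::NLPFirstDerivTester;

  void configure_directional_test( NLPFirstDerivTester &tester )
  {
    // Use the cheaper directional test that compares g'*v to g_fd'*v.
    tester.fd_testing_method( NLPFirstDerivTester::FD_DIRECTIONAL );
    // Use five random directions v in [-1,+1]; a value <= 0 would fall back
    // to a single direction of 1.0 as described above.
    tester.num_fd_directions( 5 );
  }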
This class computes the derivatives using a CalcFiniteDiffProd object, which can use up to fourth-order (central) finite differences but can also use as low as first-order one-sided differences.
The client can set the tolerances used to measure whether the analytical values of Gf and Gc are close enough to the finite-difference values. Let the function h(x) be f(x) or any cj(x), for j = 1...m. Let gh(i) = d(h(x))/d(x(i)) and fdh(i) = finite_diff(h(x))/d(x(i)). Then define the relative error between the analytic value and the finite-difference value to be:

err(i) = |gh(i) - fdh(i)| / (||gh||inf + ||fdh||inf + (epsilon)^(1/4))

The above error takes into account the relative sizes of the elements and also allows one or both of the elements to be zero without ending up with 0/0 or something like 1e-16 not comparing with zero.
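For illustration only, a minimal standalone computation of this error measure for one component, assuming dense std::vector storage for gh and fdh (the library itself works with its abstract Vector interface), could look like:

  #include <algorithm>
  #include <cmath>
  #include <limits>
  #include <vector>

  // err(i) = |gh(i) - fdh(i)| / (||gh||inf + ||fdh||inf + epsilon^(1/4))
  double relative_error( double gh_i, double fdh_i,
                         const std::vector<double> &gh,
                         const std::vector<double> &fdh )
  {
    auto inf_norm = []( const std::vector<double> &v ) {
      double nrm = 0.0;
      for ( double x : v ) nrm = std::max( nrm, std::fabs(x) );
      return nrm;
    };
    const double eps = std::numeric_limits<double>::epsilon();
    return std::fabs( gh_i - fdh_i )
           / ( inf_norm(gh) + inf_norm(fdh) + std::pow( eps, 0.25 ) );
  }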
All errors err(i) >= warning_tol are reported to *out if out != NULL and print_all_warnings==true. Otherwise, if out != NULL, only the number of such elements and the maximum violation of the warning tolerance will be printed. The first error err(i) >= error_tol that is found is reported to *out if out != NULL, and finite_diff_check() immediately returns false. If all errors satisfy err(i) < error_tol then finite_diff_check() will return true.
Given these two tolerances the client can do many things:
Print out all the comparisons that are not equal by setting warning_tol == 0.0 and error_tol = very_large_number.
Print out all suspect comparisons by setting epsilon < warning_tol < 1 and error_tol = very_large_number.
Just validate that the matrices are approximately equal and report the first discrepancy if not, by setting epsilon < error_tol < 1 and warning_tol >= error_tol.
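As a hedged sketch of the second use case above (the setter names are assumed to be generated by the STANDARD_MEMBER_COMPOSITION_MEMBERS macros, and the particular tolerance values are only examples):

  #include <limits>
  #include "NLPInterfacePack_NLPFirstDerivTester.hpp"

  using NLPInterfacePack::NLPFirstDerivTester;

  void configure_report_all_suspect( NLPFirstDerivTester &tester )
  {
    // epsilon < warning_tol < 1: report every comparison that looks suspect ...
    tester.warning_tol( 1e-10 );
    // ... but use a very large error_tol so the check never aborts early.
    tester.error_tol( std::numeric_limits<double>::max() );
  }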
There is one minor hitch to this testing. For many NLPs, there is a strict region of x in which f(x) or c(x) is not defined. In order to help ensure that the test stays out of these regions, variable bounds xl and xu and a scalar max_var_bounds_viol can be included so that the testing software will never evaluate f(x) or c(x) outside the region:

xl - max_var_bounds_viol <= x <= xu + max_var_bounds_viol

This is an important agreement made with the user.
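Purely as an illustration of this agreement (the helper below is not part of the library API), a check of the relaxed region for a dense point could be written as:

  #include <cstddef>
  #include <vector>

  // Returns true if every component satisfies
  //   xl[i] - max_var_bounds_viol <= x[i] <= xu[i] + max_var_bounds_viol.
  bool in_relaxed_bounds( const std::vector<double> &x,
                          const std::vector<double> &xl,
                          const std::vector<double> &xu,
                          double max_var_bounds_viol )
  {
    for ( std::size_t i = 0; i < x.size(); ++i ) {
      if ( x[i] < xl[i] - max_var_bounds_viol ||
           x[i] > xu[i] + max_var_bounds_viol )
        return false;
    }
    return true;
  }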
Definition at line 133 of file NLPInterfacePack_NLPFirstDerivTester.hpp.
Definition at line 137 of file NLPInterfacePack_NLPFirstDerivTester.hpp.
NLPInterfacePack::NLPFirstDerivTester::NLPFirstDerivTester (
  const calc_fd_prod_ptr_t &calc_fd_prod = Teuchos::rcp(new CalcFiniteDiffProd()),
  ETestingMethod fd_testing_method = FD_DIRECTIONAL,
  size_type num_fd_directions = 1,
  value_type warning_tol = 1e-8,
  value_type error_tol = 1e-3
  )
Constructor.
Definition at line 70 of file NLPInterfacePack_NLPFirstDerivTester.cpp.
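For example, a tester with explicit (non-default) settings could be constructed as in the following sketch; the header and namespace for CalcFiniteDiffProd are assumptions inferred from the default argument shown in the signature above:

  #include "NLPInterfacePack_NLPFirstDerivTester.hpp"
  #include "Teuchos_RCP.hpp"

  using NLPInterfacePack::NLPFirstDerivTester;
  using NLPInterfacePack::CalcFiniteDiffProd;

  void make_tester_example()
  {
    NLPFirstDerivTester tester(
      Teuchos::rcp( new CalcFiniteDiffProd() ),  // finite-difference engine
      NLPFirstDerivTester::FD_DIRECTIONAL,       // fd_testing_method
      3,                                         // num_fd_directions
      1e-6,                                      // warning_tol
      1e-2 );                                    // error_tol
    // ... hand `tester` to the code that will call finite_diff_check() ...
  }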
NLPInterfacePack::NLPFirstDerivTester::STANDARD_COMPOSITION_MEMBERS ( CalcFiniteDiffProd, calc_fd_prod )
NLPInterfacePack::NLPFirstDerivTester::STANDARD_MEMBER_COMPOSITION_MEMBERS ( ETestingMethod, fd_testing_method )
NLPInterfacePack::NLPFirstDerivTester::STANDARD_MEMBER_COMPOSITION_MEMBERS ( size_type, num_fd_directions )
NLPInterfacePack::NLPFirstDerivTester::STANDARD_MEMBER_COMPOSITION_MEMBERS ( value_type, warning_tol )
NLPInterfacePack::NLPFirstDerivTester::STANDARD_MEMBER_COMPOSITION_MEMBERS ( value_type, error_tol )
bool NLPInterfacePack::NLPFirstDerivTester::finite_diff_check (
  NLP *nlp,
  const Vector &xo,
  const Vector *xl,
  const Vector *xu,
  const MatrixOp *Gc,
  const Vector *Gf,
  bool print_all_warnings,
  std::ostream *out
  ) const
This function takes an NLP object together with its computed derivatives and function values and validates the functions and the derivatives by evaluating them about the given point x. If all the checks described in the introduction pass, this function returns true; otherwise it returns false.
Parameters:
nlp  [in] NLP object for which the derivatives are computed and tested.
xo  [in] Point at which the derivatives are computed.
xl  [in] If != NULL then these are the lower variable bounds.
xu  [in] If != NULL then these are the upper variable bounds. If xl != NULL then xu != NULL must also be true and vice versa, or a std::invalid_argument exception will be thrown.
Gc  [in] A matrix object for Gc computed at xo. If Gc==NULL then it is not tested.
Gf  [in] Gradient of f(x) computed at xo. If Gf==NULL then it is not tested.
print_all_warnings  [in] If true then all errors greater than warning_tol will be printed if out!=NULL.
out  [in/out] If != NULL then some summary information is printed to it, and if a derivative does not match up the output identifies which derivative failed. If out == NULL then no output is printed.
Returns:
true if all the derivatives check out, and false otherwise.

Definition at line 84 of file NLPInterfacePack_NLPFirstDerivTester.cpp.
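A hedged usage sketch follows; the NLP, the point, the bounds, and the precomputed Gc and Gf are assumed to have been created and evaluated elsewhere by the client, and the type names are used unqualified as in the signature above:

  #include <iostream>
  #include "NLPInterfacePack_NLPFirstDerivTester.hpp"

  using namespace NLPInterfacePack;

  bool check_first_derivatives(
    NLP *nlp, const Vector &xo, const Vector *xl, const Vector *xu,
    const MatrixOp *Gc, const Vector *Gf )
  {
    NLPFirstDerivTester tester;  // defaults: FD_DIRECTIONAL, one direction
    const bool print_all_warnings = true;
    const bool ok = tester.finite_diff_check(
      nlp, xo, xl, xu, Gc, Gf, print_all_warnings, &std::cout );
    if ( !ok )
      std::cerr << "First-order derivative check failed!\n";
    return ok;
  }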