MOOCHO
Version of the Day
Below is the output file MoochoAlgo.out generated by the program ExampleNLPBanded.exe with the command-line arguments

--echo-command-line --nD=3000 --bw=10 --diag-scal=1e+3 --nI=5 --xIl=1e-5 --xo=0.1

and the Moocho.opt options file shown here.
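
As an aid for reading the algorithm printout that follows (this summary is an editorial addition, not part of the generated output), the example is an NLP of the general form MOOCHO targets, with equality constraints and simple bounds:

\[
  \min_x \; f(x) \quad \text{s.t.} \quad c(x) = 0, \quad x_L \le x \le x_U .
\]

Each iteration builds the search direction from a range-space and a null-space component, d_k = Ypy_k + Zpz_k, where Z_k satisfies Gc_k(:,equ_decomp)' * Z_k = 0, as printed in the "EvalNewPoint" and "CalcDFromYPYZPZ" steps below.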
Here are the other types of output associated with this run:

Output file MoochoAlgo.out:
Warning! The options group 'MoochoSolver' was not found.
Using a default set of options ...

********************************************************************
*** Algorithm information output                                 ***
***                                                              ***
*** Below, information about how the the MOOCHO algorithm is     ***
*** setup is given and is followed by detailed printouts of the  ***
*** contents of the algorithm state object (i.e. iteration       ***
*** quantities) and the algorithm description printout           ***
*** (if the option MoochoSolver::print_algo = true is set).      ***
********************************************************************

*** Echoing input options ...

begin_options

options_group DecompositionSystemStateStepBuilderStd {
    null_space_matrix = EXPLICIT;
    range_space_matrix = ORTHOGONAL;
}

options_group NLPAlgoConfigMamaJama {
    line_search_method = FILTER;
    quasi_newton = BFGS;
}

options_group NLPSolverClientInterface {
    calc_conditioning = true;
    calc_matrix_info_null_space_only = true;
    calc_matrix_norms = true;
    feas_tol = 1e-7;
    journal_output_level = PRINT_ALGORITHM_STEPS;
    journal_print_digits = 10;
    max_iter = 20;
    max_run_time = 2.0;
    null_space_journal_output_level = PRINT_ITERATION_QUANTITIES;
    opt_tol = 1e-2;
}

end_options

workspace_MB < 0.0:
Setting workspace_MB = n * default_ws_scale * 1e-6 * sizeof(value_type)
                     = 3005 * 10 * 1e-6 * 8 = 0.2404 MB

*** Setting up to run MOOCHO on the NLP using a configuration object of type
'MoochoPack::NLPAlgoConfigMamaJama' ...

*****************************************************************
*** NLPAlgoConfigMamaJama Configuration                      ***
***                                                          ***
*** Here, summary information about how the algorithm is     ***
*** configured is printed so that the user can see how the   ***
*** properties of the NLP and the set options influence      ***
*** how an algorithm is configured.                          ***
*****************************************************************

*** Creating the NLPAlgo algo object ...

*** Setting the NLP and track objects to the algo object ...

*** Probing the NLP object for supported interfaces ...

Detected that NLP object supports the NLPFirstOrder interface!

*** Setting option defaults for options not set by the user or determined some other way ...

*** End setting default options

*** Sorting out some of the options given input options ...

The only merit function currently supported is:
merit_function_type = L1;

*** Setting option defaults for options not set by the user or determined some other way ...

max_basis_cond_change_frac < 0 : setting max_basis_cond_change_frac = 1e+4
num_lbfgs_updates_stored < 0 : setting num_lbfgs_updates_stored = 10
hessian_initialization == AUTO: setting hessian_initialization = IDENTITY
qp_solver_type == AUTO: setting qp_solver_type = QPKWIK
l1_penalty_param_update == AUTO: setting l1_penalty_param_update = MULT_FREE
full_steps_after_k < 0 : the line search will never be turned off after so many iterations

*** End setting default options

qp_solver == QPKWIK and nlp.num_bounded_x() == 5 > 0:
Setting quasi_newton == BFGS...

The BasisSystem object with concreate type 'AbstractLinAlgPack::BasisSystemPermDirectSparse'
supports the BasisSystemPerm interface.
Using DecompositionSystemVarReductPermStd to support basis permutations ...

*** Creating the state object and setting up iteration quantity objects ...

*** Creating and setting the step objects ...

Configuring an algorithm for a nonlinear generally constrained NLP ( num_bounded_x > 0 ) ...

*** Algorithm Steps ***

1. "EvalNewPoint" (MoochoPack::EvalNewPointStd_Step)
2. "QuasiNormalStep" (MoochoPack::QuasiNormalStepStd_Step)
  2.1. "CheckDecompositionFromPy" (MoochoPack::CheckDecompositionFromPy_Step)
  2.2. "CheckDecompositionFromRPy" (MoochoPack::CheckDecompositionFromRPy_Step)
3. "ReducedGradient" (MoochoPack::ReducedGradientStd_Step)
4.-1. "CheckSkipBFGSUpdate" (MoochoPack::CheckSkipBFGSUpdateStd_Step)
4. "ReducedHessian" (MoochoPack::ReducedHessianSecantUpdateStd_Step)
5.-1. "SetDBoundsStd" (MoochoPack::SetDBoundsStd_AddedStep)
5. "TangentialStep" (MoochoPack::QPFailureReinitReducedHessian_Step)
6. "CalcDFromYPYZPZ" (MoochoPack::CalcDFromYPYZPZ_Step)
7. "CalcReducedGradLagrangian" (MoochoPack::CalcReducedGradLagrangianStd_AddedStep)
8. "CheckConvergence" (MoochoPack::CheckConvergenceStd_AddedStep)
9.-1. "LineSearchFullStep" (MoochoPack::LineSearchFullStep_Step)
9. "LineSearch" (MoochoPack::LineSearchFailureNewDecompositionSelection_Step)

*** NLP ***

NLPInterfacePack::ExampleNLPBanded

*** Iteration Quantities ***

iq_name             iq_id   concrete type of iq / concrete type of object
-----               ------  ---------------------------------------------
Gc                  0       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOp>
                            AbstractLinAlgPack::MatrixPermAggr
Gf                  15      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
LS_FilterEntries    7       IterationPack::IterQuantityAccessContiguous<std::list<MoochoPack::FilterEntry, std::allocator<MoochoPack::FilterEntry> > >
                            std::list<MoochoPack::FilterEntry, std::allocator<MoochoPack::FilterEntry> >
R                   3       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOpNonsing>
                            ConstrainedOptPack::MatrixDecompRangeOrthog
Uy                  5       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOp>
                            AbstractLinAlgPack::MatrixOpSubView
Uz                  4       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOp>
                            AbstractLinAlgPack::MatrixOpSubView
Y                   2       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOp>
                            ConstrainedOptPack::MatrixIdentConcatStd
Ypy                 16      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
Z                   1       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixOp>
                            ConstrainedOptPack::MatrixIdentConcatStd
Zpz                 17      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
act_set_stats       10      IterationPack::IterQuantityAccessContiguous<MoochoPack::ActSetStats>
                            MoochoPack::ActSetStats
alpha               24      IterationPack::IterQuantityAccessContiguous<double>
                            double
c                   14      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
d                   18      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
dl                  8       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
du                  9       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
eta                 23      IterationPack::IterQuantityAccessContiguous<double>
                            double
f                   13      IterationPack::IterQuantityAccessContiguous<double>
                            double
feas_kkt_err        28      IterationPack::IterQuantityAccessContiguous<double>
                            double
lambda              30      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
mu                  25      IterationPack::IterQuantityAccessContiguous<double>
                            double
nu                  31      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
opt_kkt_err         27      IterationPack::IterQuantityAccessContiguous<double>
                            double
phi                 26      IterationPack::IterQuantityAccessContiguous<double>
                            double
qp_grad             22      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            NULL
qp_solver_stats     11      IterationPack::IterQuantityAccessContiguous<ConstrainedOptPack::QPSolverStats>
                            ConstrainedOptPack::QPSolverStats
quasi_newton_stats  32      IterationPack::IterQuantityAccessContiguous<MoochoPack::QuasiNewtonStats>
                            MoochoPack::QuasiNewtonStats
rGL                 29      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            NULL
rGf                 19      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            NULL
rHL                 6       IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::MatrixSymOp>
                            AbstractLinAlgPack::MatrixSymPosDefCholFactor
w                   20      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            NULL
x                   12      IterationPack::IterQuantityAccessContiguous<AbstractLinAlgPack::VectorMutable>
                            AbstractLinAlgPack::VectorMutableDense
zeta                21      IterationPack::IterQuantityAccessContiguous<double>
                            double

*** Algorithm Description ***

1. "EvalNewPoint" (MoochoPack::EvalNewPointStd_Step)
    *** Evaluate the new point and update the range/null decomposition
    if nlp is not initialized then initialize the nlp
    if x is not updated for any k then set x_k = xinit
    if Gc_k is not updated Gc_k = Gc(x_k) <: space_x|space_c
    For Gc_k = [ Gc_k(:,equ_decomp), Gc_k(:,equ_undecomp) ] where:
      Gc_k(:,equ_decomp) <: space_x|space_c(equ_decomp) has full column rank r
    Find:
      Z_k <: space_x|space_null
        s.t. Gc_k(:,equ_decomp)' * Z_k = 0
      Y_k <: space_x|space_range
        s.t. [Z_k Y_k] is nonsigular
      R_k <: space_c(equ_decomp)|space_range
        s.t. R_k = Gc_k(:,equ_decomp)' * Y_k
      if m > r : Uz_k <: space_c(equ_undecomp)|space_null
        s.t. Uz_k = Gc_k(:,equ_undecomp)' * Z_k
      if m > r : Uy_k <: space_c(equ_undecomp)|space_range
        s.t. Uy_k = Gc_k(:,equ_undecomp)' * Y_k
    begin update decomposition (class 'MoochoPack::DecompositionSystemHandlerVarReductPerm_Strategy')
      *** Updating or selecting a new decomposition using a variable reduction
      *** range/null decomposition object.
      if decomp_sys does not support the DecompositionSystemVarReductPerm interface then
        throw exception!
      if nlp does not support the NLPVarReductPerm interface then
        throw exception!
      decomp_updated = false
      get_new_basis = false
      new_basis_selected = false
      if( select_new_decomposition == true ) then
        get_new_basis = true
        select_new_decomposition = false
      end
      if (decomp_sys does not have a basis) then
        get_new_basis = true
      end
      if (get_new_basis == true) then
        begin update decomposition (class = 'ConstrainedOptPack::DecompositionSystemVarReductPermStd')
          *** Variable reduction decomposition (class DecompositionSytemVarReductImp)
          C = Gc(var_dep,equ_decomp)' (using basis_sys)
          if C is nearly singular then throw SingularDecomposition exception
          if D_imp == MAT_IMP_IMPICIT then
            D = -inv(C)*N represented implicitly (class MatrixVarReductImplicit)
          else
            D = -inv(C)*N computed explicity (using basis_sys)
          end
          Z = [ D; I ] (class MatrixIdentConcatStd)
          Uz = Gc(var_indep,equ_undecomp)' - Gc(var_dep,equ_undecomp)'*D
          begin update Y, R and Uy
            *** Orthogonal decompositon Y, R and Uy matrices (class DecompositionSystemOrthogonal)
            Y  = [ I; -D' ] (using class MatrixIdentConcatStd)
            R  = C*(I + D*D')
            Uy = E - F*D'
          end update of Y, R and Uy
        end update decomposition
        if SingularDecomposition exception was not thrown then
          decomp_updated = true
        end
      end
      if (decomp_updated == false) then
        nlp_selected_basis = false
        if (nlp.selects_basis() == true) then
          for each basis returned from nlp.get_basis(...) or nlp.get_next_basis()
            decomp_sys.set_decomp(Gc_k...) -> Z_k,Y_k,R_k,Uz_k,Uy_k
            if SingularDecompositon exception was not thrown then
              nlp_selected_basis = true
              exit loop
            end
          end
        end
        if (nlp_selected_basis == false) then
          decomp_sys.select_decomp(Gc_k...) -> P_var,P_equ,Z,Y,R,Uz,Uy and permute Gc
        end
        *** If you get here then no unexpected exceptions were thrown and a new
        *** basis has been selected
        num_basis_k = num_basis_k(last_updated) + 1
        P_var_last = P_var_current
        P_equ_last = P_equ_current
        P_var_current = P_var
        P_equ_current = P_equ
        Resort x_k according to P_var_current
      end
    end update decomposition
    if ( (decomp_sys_testing==DST_TEST)
      or (decomp_sys_testing==DST_DEFAULT and check_results==true)
      ) then
      check properties for Z_k, Y_k, R_k, Uz_k and Uy_k
    end
    end
    Gf_k = Gf(x_k) <: space_x
    if m > 0 and c_k is not updated c_k = c(x_k) <: space_c
    if f_k is not updated f_k = f(x_k) <: REAL
    if ( (fd_deriv_testing==FD_TEST)
      or (fd_deriv_testing==FD_DEFAULT and check_results==true)
      ) then
      check Gc_k (if m > 0) and Gf_k by finite differences.
    end

2. "QuasiNormalStep" (MoochoPack::QuasiNormalStepStd_Step)
    *** Calculate the range space step
    py_k = - inv(R_k) * c_k(equ_decomp)
    Ypy_k = Y_k * py_k

2.1. "CheckDecompositionFromPy" (MoochoPack::CheckDecompositionFromPy_Step)
    default: beta_min = inf
             max_decomposition_cond_change_frac = 10000
             max_cond = 0.01 * mach_eps
    beta = norm_inf(py_k) / (norm_inf(c_k(equ_decomp))+small_number)
    select_new_decomposition = false
    if num_basis_k is updated then
      beta_min = beta + 1
    end
    if beta + 1 < beta_min then
      beta_min = beta + 1
    else
      if (beta + 1) / beta_min > max_decomposition_cond_change_frac then
        select_new_decomposition = true
      end
    end
    if beta > max_cond then
      select_new_decomposition = true
    end
    if select_new_decomposition == true then
      new decomposition selection : MoochoPack::NewDecompositionSelectionStd_Strategy
        if k > max_iter then
          terminate the algorithm
        end
        Select a new basis at current point
        x_kp1 = x_k
        alpha_k = 0
        k=k+1
        goto EvalNewPoint
      end new decomposition selection
    end

2.2. "CheckDecompositionFromRPy" (MoochoPack::CheckDecompositionFromRPy_Step)
    *** Try to detect when the decomposition is becomming illconditioned
    default: beta_min = inf
             max_decomposition_cond_change_frac = 10000
    beta = norm_inf(R*py_k + c_k(decomp)) / (norm_inf(c_k(decomp))+small_number)
    select_new_decomposition = false
    if num_basis_k is updated then
      beta_min = beta
    end
    if beta < beta_min then
      beta_min = beta
    else
      if beta/ beta_min > max_decomposition_cond_change_frac then
        select_new_decomposition = true
      end
    end
    if select_new_decomposition == true then
      new decomposition selection : MoochoPack::NewDecompositionSelectionStd_Strategy
        if k > max_iter then
          terminate the algorithm
        end
        Select a new basis at current point
        x_kp1 = x_k
        alpha_k = 0
        k=k+1
        goto EvalNewPoint
      end new decomposition selection
    end

3. "ReducedGradient" (MoochoPack::ReducedGradientStd_Step)
    *** Evaluate the reduced gradient of the objective funciton
    rGf_k = Z_k' * Gf_k

4.-1. "CheckSkipBFGSUpdate" (MoochoPack::CheckSkipBFGSUpdateStd_Step)
    *** Check if we should do the BFGS update
    if rHL_km1 is update then
      If Ypy_km1, Zpz_km1, rGL_km1, or c_km1 is not updated then
        *** Warning, insufficient information to determine if we should
        *** perform the update. Check for sufficient backward storage.
        rHL_k = rHL_km1
      else
        *** Check if we are in the proper region
        ratio = (skip_bfgs_prop_const/sqrt(norm(rGL_km1,2)+norm(c_km1,2)))
                * (norm(Zpz_km1,2)/norm(Ypy_km1,2) )
        if ratio < 1 then
          rHL_k = rHL_km1
        end
      end
    end

4. "ReducedHessian" (MoochoPack::ReducedHessianSecantUpdateStd_Step)
    *** Calculate the reduced hessian of the Lagrangian rHL = Z' * HL * Z
    default: num_basis_remembered = NO_BASIS_UPDATED_YET
             iter_k_rHL_init_ident = -1
    if num_basis_remembered = NO_BASIS_UPDATED_YET then
      num_basis_remembered = num_basis
    end
    if num_basis_remembered != num_basis then
      num_basis_remembered = num_basis
      new_basis = true
    end
    if rHL_k is not updated then
      if new_basis == true then
        *** Transition rHL to the new basis by just starting over.
        rHL_k = eye(n-r) *** must support MatrixSymInitDiag interface
        iter_k_rHL_init_ident = k
        goto next step
      end
      if rHL_km1 and rGf_km1 are updated then
        *** We should have the information to perform a BFGS update
        y = rGf_k - rGf_km1
        s = alpha_km1 * pz_km1
        if k - 1 == iter_k_rHL_init_ident then
          first_update = true
        else
          first_update = false
        end
        rHL_k = rHL_km1
        begin secant update (MoochoPack::ReducedHessianSecantUpdateBFGSFull_Strategy)
          *** Perform BFGS update on full matrix where: B = rHL_k
          if use_dampening == true then
            if s'*y >= 0.2 * s'*B*s then
              theta = 1.0
            else
              theta = 0.8*s'*B*s / (s'*B*s - s'*y)
            end
            y = theta*y + (1-theta)*B*s
          end
          if first_update && rescale_init_identity and y'*s is sufficently positive then
            B = |(y'*y)/(y'*s)| * eye(size(s))
          end
          if s'*y is sufficently positive then
            *** Peform BFGS update
            (B, s, y ) -> B (through MatrixSymSecantUpdate interface)
            if ( check_results && secant_testing == SECANT_TEST_DEFAULT )
              or ( secant_testing == SECANT_TEST_ALWAYS ) then
              if B*s != y then
                *** The secant condition does not check out
                Throw TestFailed!
              end
            end
          end
        end secant update
      else
        *** We have no information for which to preform a BFGS update.
        k_last_offset = last iteration rHL was updated for
        if k_last_offset does not exist then
          *** We are left with no choise but to initialize rHL
          rHL_k = eye(n-r) *** must support MatrixSymInitDiag interface
          iter_k_rHL_init_ident = k
        else
          *** No new basis has been selected so we may as well
          *** just use the last rHL that was updated
          rHL_k = rHL_k(k_last_offset)
        end
      end
    end

5.-1. "SetDBoundsStd" (MoochoPack::SetDBoundsStd_AddedStep)
    *** Set the bounds on d
    d_bounds_k.l = xl - x_k
    d_bounds_k.u = xu - x_k

5. "TangentialStep" (MoochoPack::QPFailureReinitReducedHessian_Step)
    do null space step : MoochoPack::TangentialStepWithInequStd_Step
      *** Calculate the null-space step by solving a constrained QP
      qp_grad_k = rGf_k
      if w_k is updated then set qp_grad_k = qp_grad_k + zeta_k * w_k
      bl = dl_k - Ypy_k
      bu = du_k - Ypy_k
      etaL = 0.0
      *** Determine if we can use simple bounds on pz or not
      if num_bounded(bl_k(var_dep),bu_k(var_dep)) > 0 then
        bounded_var_dep = true
      else
        bounded_var_dep = false
      end
      if( m==0
          or
          ( Z_k is a variable reduction null space matrix
            and
            ( ||Ypy_k(var_indep)||inf == 0 or bounded_var_dep==false ) )
        ) then
        use_simple_pz_bounds = true
      else
        use_simple_pz_bounds = false
      end
      *** Setup QP arguments
      qp_g = qp_grad_k
      qp_G = rHL_k
      if (use_simple_pz_bounds == true) then
        qp_dL = bl(var_indep), qp_dU = bu(var_indep))
        if (m > 0) then
          qp_E = Z_k.D, qp_b = Ypy_k(var_dep)
          qp_eL = bl(var_dep), qp_eU = bu(var_dep)
        end
      elseif (use_simple_pz_bounds == false) then
        qp_dL = -inf, qp_dU = +inf
        qp_E = Z_k, qp_b = Ypy_k
        qp_eL = bl, qp_eU = bu
      end
      if (m > r) then
        qp_F = V_k, qp_f = Uy_k*py_k + c_k(equ_undecomp)
      else
        qp_F = empty, qp_f = empty
      end
      Given active-set statistics (act_set_stats_km1)
        frac_same = max(num_active-num_adds-num_drops,0)/(num_active)
      Use a warm start when frac_same >= warm_start_frac
      Solve the following QP to compute qp_d, qp_eta, qp_Ed = qp_E * qp_d
      ,qp_nu, qp_mu and qp_lambda (ConstrainedOptPack::QPSolverRelaxedQPKWIK):
        min  qp_g' * qp_d + 1/2 * qp_d' * qp_G * qp_d + M(eta)
             qp_d <: R^(n-r)
        s.t.
             etaL  <=  eta
             qp_dL <= qp_d                       <= qp_dU   [qp_nu]
             qp_eL <= qp_E * qp_d + (1-eta)*qp_b <= qp_eU   [qp_mu]
             qp_F * d_qp + (1-eta) * qp_f  = 0              [qp_lambda]
      if (qp_testing==QP_TEST)
        or (fd_deriv_testing==QP_TEST_DEFAULT and check_results==true) then
        Check the optimality conditions of the above QP
        if the optimality conditions do not check out then
          set throw_qp_failure = true
        end
      end
      *** Set the solution to the QP subproblem
      pz_k  = qp_d
      eta_k = qp_eta
      if (use_simple_pz_bounds == true) then
        nu_k(var_dep)  = qp_mu,  nu_k(var_indep)  = qp_nu
        Zpz_k(var_dep) = qp_Ed,  Zpz_k(var_indep) = pz_k
      elseif (use_simple_pz_bounds == false) then
        nu_k  = qp_mu
        Zpz_k = qp_Ed
      end
      if m > r then
        lambda_k(equ_undecomp) = qp_lambda
      end
      if (eta_k > 0) then set Ypy_k = (1-eta_k) * Ypy_k
      if QP solution is suboptimal then
        throw_qp_failure = true
      elseif QP solution is primal feasible (not optimal) then
        throw_qp_failure = primal_feasible_point_error
      elseif QP solution is dual feasible (not optimal) then
        find max u s.t.
          dl_k <= u*(Ypy_k+Zpz_k) <= du_k
        alpha_k = u
        throw_qp_failure = dual_feasible_point_error
      end
      if (eta_k == 1.0) then
        The constraints are infeasible!
        throw InfeasibleConstraints(...)
      end
      if (throw_qp_failure == true) then
        throw QPFailure(...)
      end
    end null space step
    if QPFailure was thrown then
      if QP failed already then
        rethrow QPFailure
      end
      if k > max_iter then
        terminate the algorithm!
      end
      set all rHL_{k} to not updated
      goto ReducedHessian
    end

6. "CalcDFromYPYZPZ" (MoochoPack::CalcDFromYPYZPZ_Step)
    *** Calculates the search direction d from Ypy and Zpz
    d_k = Ypy_k + Zpz_k

7. "CalcReducedGradLagrangian" (MoochoPack::CalcReducedGradLagrangianStd_AddedStep)
    *** Evaluate the reduced gradient of the Lagrangian
    if nu_k is updated and nu_k.nz() > 0 then
      rGL_k = Z_k' * (Gf_k + nu_k) + Uz_k' * lambda_k(equ_undecomp)
    else
      rGL_k = rGf_k + Uz_k' * lambda_k(equ_undecomp)
    end

8. "CheckConvergence" (MoochoPack::CheckConvergenceStd_AddedStep)
    *** Check to see if the KKT error is small enough for convergence
    if scale_(opt|feas|comp)_error_by == SCALE_BY_ONE then
      scale_(opt|feas|comp)_factor = 1.0
    else if scale_(opt|feas|comp)_error_by == SCALE_BY_NORM_2_X then
      scale_(opt|feas|comp)_factor = 1.0 + norm_2(x_k)
    else if scale_(opt|feas|comp)_error_by == SCALE_BY_NORM_INF_X then
      scale_(opt|feas|comp)_factor = 1.0 + norm_inf(x_k)
    end
    if scale_opt_error_by_Gf == true then
      opt_scale_factor = 1.0 + norm_inf(Gf_k)
    else
      opt_scale_factor = 1.0
    end
    opt_err = norm_inf(rGL_k)/opt_scale_factor
    feas_err = norm_inf(c_k)
    comp_err = max(i, nu(i)*(xu(i)-x(i)), -nu(i)*(x(i)-xl(i)))
    opt_kkt_err_k = opt_err/scale_opt_factor
    feas_kkt_err_k = feas_err/scale_feas_factor
    comp_kkt_err_k = feas_err/scale_comp_factor
    if d_k is updated then
      step_err = max( |d_k(i)|/(1+|x_k(i)|), i=1..n )
    else
      step_err = 0
    end
    if opt_kkt_err_k < opt_tol
      and feas_kkt_err_k < feas_tol
      and step_err < step_tol then
      report optimal x_k, lambda_k and nu_k to the nlp
      terminate, the solution has beed found!
    end

9.-1. "LineSearchFullStep" (MoochoPack::LineSearchFullStep_Step)
    if alpha_k is not updated then
      alpha_k = 1.0
    end
    x_kp1 = x_k + alpha_k * d_k
    f_kp1 = f(x_kp1)
    if m > 0 then c_kp1 = c(x_kp1)

9. "LineSearch" (MoochoPack::LineSearchFailureNewDecompositionSelection_Step)
    do line search step : MoochoPack::LineSearchFilter_Step
      *** Filter line search method
      # Assumes initial d_k & alpha_k (0-1) is known and
      # x_kp1, f_kp1, c_kp1 and h_kp1 are calculated for that alpha_k
      Gf_t_dk = <Gf,dk>
      theta_k = norm_1(c_k)/c_k.dim()
      theta_small = theta_small_fact*max(1.0,theta_k)
      if f_min != F_MIN_UNBOUNDED then
        gamma_f_used = gamma_f * (f_k-f_min)/theta_k
      else
        gamma_f_used = gamma_f
      end
      if Gf_t_dk < 0 then
        alpha_min = min(gamma_theta, gamma_f_used*theta_k/(-Gf_t_dk))
        if theta_k <= theta_small then
          alpha_min = min(alpha_min, delta_*(theta_k^s_theta)/((-Gf_t_dk)^s_f))
        end
      else
        alpha_min = gamma_theta
      end
      alpha_min = alpha_min*gamma_alpha
      # Start the line search
      accepted = false
      augment = false
      while alpha > alpha_min and accepted = false then
        accepted = true
        if any values in x_kp1, f_kp1, c_kp1, h_kp1 are nan or inf then
          accepted = false
        end
        # Check filter
        if accepted = true then
          theta_kp1 = norm_1(c_kp1)/c_kp1.dim()
          for each pt in the filter do
            if theta_kp1 >= pt.theta and f_kp1 >= pt.f then
              accepted = false
              break for
            end
          next pt
        end
        #Check Sufficient Decrease
        if accepted = true then
          # if switching condition is satisfied, use Armijo on f
          if theta_k < theta_small and Gf_t_dk < 0
             and ((-Gf_t_dk)^s_f)*alpha_k < delta*theta_k^s_theta then
            if f_kp1 <= f_k + eta_f*alpha_k*Gf_t_dk then
              accepted = true
            end
          else
            # Verify factional reduction
            if theta_kp1 < (1-gamma_theta)*theta_k
               or f_kp1 < f_k - gamma_f_used*theta_k then
              accepted = true
              augment = true
            end
          end
        end
        if accepted = false then
          alpha_k = alpha_k*back_track_frac
          x_kp1 = x_k + alpha_k * d_k
          f_kp1 = f(x_kp1)
          c_kp1 = c(x_kp1)
          h_kp1 = h(x_kp1)
        end
      end while
      if accepted = true then
        if augment = true then
          Augment the filter (use f_with_boundary and theta_with_boundary)
        end
      else
        goto the restoration phase
      end
    end line search step
    if thrown line_search_failure then
      if line search failed at the last iteration also then
        throw line_search_failure
      end
      new decomposition selection : MoochoPack::NewDecompositionSelectionStd_Strategy
        if k > max_iter then
          terminate the algorithm
        end
        Select a new basis at current point
        x_kp1 = x_k
        alpha_k = 0
        k=k+1
        goto EvalNewPoint
      end new decomposition selection
    end

10. "Major Loop" :
    if k >= max_iter then
      terminate the algorithm
    elseif run_time() >= max_run_time then
      terminate the algorithm
    else
      k = k + 1
      goto 1
    end

***************************************************************

Warning, the following options groups where not accessed.
An options group may not be accessed if it is not looked for
or if an "optional" options group was looked from and the
user spelled it incorrectly: