

3.3 Minimising the chi-square Function

Once the input function of your Fitter object is properly initialised, you can fit a model object model to some input data idata with the methods

int Fitter::local_fit(Model& model, const ObservableVector& idata)
int Fitter::global_fit(Model& model, const ObservableVector& idata)

These methods take the values in idata as measured values of the observables and then minimise the input function. They return zero if the fit was successful and a non-zero value otherwise. If the last argument is omitted, the default input values stored in the Fitter::input_function() object are used. After a successful fit, the parameters of the model object (as returned by the Model::parameter method) are the best-fit parameters, the observables (as returned by the Model::observable method) are the values of the observables at the best-fit point, the chi-square value (as returned by Model::chisquare()) is the minimal chi-square value, and the constraint penalty value (as returned by Model::constraint_penalty()) is the value of the constraint penalty at the best-fit point (without the constraint penalty factor, see Non-linear Constraints).

In the notation of [arXiv:1207.1446v2], and for given input data x_0 and parameters \xi, the chi-square function is related to the input function D by

\chi^2(\xi) = D(\tilde{x}(\xi),x_0) - D(x_0,x_0) ,

where \tilde x(\xi) are the observables predicted by the model for the parameters \xi. The same relation holds for the constraint penalty \chi^2_c (see Non-linear Constraints).

The difference between the two fitting methods is that local_fit uses the current parameter values of the model object as the starting point for the minimisation, while global_fit looks through the dictionary of the model object (see Finding the Right Starting Point) to find the best starting point.
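
As a minimal sketch (the Fitter object fitter, the object model of your Model subclass, and the ObservableVector idata are assumed to be set up as described in the previous sections; all names are chosen for illustration):

#include <iostream>

// fitter: a Fitter object with an initialised input function
// model:  an object of your Model subclass
// idata:  an ObservableVector with the measured values
int status = fitter.global_fit(model, idata);  // start from the best dictionary entry
if (status == 0) {
    // model now holds the best-fit parameters and observables.
    std::cout << "best-fit chi^2: " << model.chisquare() << std::endl;
} else {
    std::cerr << "fit failed with status " << status << std::endl;
}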

To calculate only the chi-square value for the current parameter values (without any minimisation), you can use the method

int Fitter::calc(Model& model, const ObservableVector& cvals)

It sets model.chisquare() to the computed chi-square value, model.constraint_penalty() to the computed constraint penalty value, and returns zero if the calculation was successful and a non-zero value otherwise. The values in cvals are used as experimental inputs for the observables. If the cvals argument is omitted, the default values in input_function().central_values() are used.

The Fitter class also provides methods to compute the chi-square and constraint penalty values due to some subset of observables. The corresponding methods are

int Fitter::calc_contrib(const IndexVector& iv, Model& model,
                         const ObservableVector& cvals)
int Fitter::calc_contrib(int i, Model& model,
                         const ObservableVector& cvals)
int Fitter::calc_contrib(int i1, int i2, Model& model,
                         const ObservableVector& cvals)

In the first prototype, the observables whose contributions should be included in the computation of the chi-square and the constraint penalty are specified with an IndexVector object containing the corresponding observable indices. If the contribution of only one observable is needed, you can supply the index of that observable to the second prototype. The third prototype calculates the contributions of all observables with indices between (and including) i1 and i2. Like the Fitter::calc method, the above methods set model.chisquare() to the computed chi-square value and model.constraint_penalty() to the computed constraint penalty value; they use cvals as central values for the experimental observables (or input_function().central_values() if the cvals argument is omitted) and return zero if the calculation was successful and a non-zero value otherwise. Note that, if your input function contains a component which depends on several observables (like a CorrelatedGaussianIC component), myFitter has no way of separating the contributions of these observables. In this case the full contribution of that component is included as soon as at least one of its observables is requested in the call to Fitter::calc_contrib.
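
For illustration, a short sketch of calc and calc_contrib, assuming the same fitter and model objects as above and assuming that IndexVector can be filled like a standard vector of indices (the observable indices 0, 2 and 5 are purely illustrative):

#include <iostream>

// Chi-square at the current parameter values, using the default
// central values from input_function().central_values():
if (fitter.calc(model) == 0)
    std::cout << "total chi^2: " << model.chisquare() << std::endl;

// Contribution of the observables with indices 0, 2 and 5:
IndexVector iv;
iv.push_back(0);
iv.push_back(2);
iv.push_back(5);
if (fitter.calc_contrib(iv, model) == 0)
    std::cout << "partial chi^2: " << model.chisquare() << std::endl;

// Contribution of all observables with indices 0 to 5 (inclusive):
if (fitter.calc_contrib(0, 5, model) == 0)
    std::cout << "chi^2 of observables 0-5: " << model.chisquare() << std::endl;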

For the actual minimisations myFitter uses a custom implementation of the BFGS algorithm. You can tune the parameters for this algorithm with several Fitter methods:

int minimizer_verbosity()
void minimizer_verbosity(int n)

These methods return or set the verbosity level for minimisations. The default value is zero, in which case no information is displayed during a minimisation. Values of 1 to 3 will print increasing amounts of information to std::cout.

double minimizer_line_search_precision()
void minimizer_line_search_precision(double p)

These methods return or set the precision for one-dimensional minimisations. The default setting of 0.1 is usually sufficient.

double minimizer_precision()
void minimizer_precision(double p)

These methods return or set the precision of the minimiser. If the norm of the gradient of the chi-square function drops below minimizer_precision(), the minimisation is considered successful. Remember that internally all parameters are normalised to their scale (see Model::scale), so that derivatives of the chi-square function with respect to the parameters are multiplied by the scale of the corresponding parameter. Thus, unreasonably small values for the scales of the parameters can lead to premature termination of the minimisation. The default setting is 0.001.

int minimizer_iterations()
void minimizer_iterations(int n)

These methods return or set the maximum number of iterations for a chi-square minimisation. If the maximum number of iterations is exceeded, the minimisation is aborted and the status GSL_EMAXITER (defined in the header gsl/gsl_errno.h) is returned. The default setting is 200.

bool minimizer_keep_hessian()
void minimizer_keep_hessian(bool b)

If this is set to true, the BFGS estimate of the Hessian matrix from the previous minimisation is used as the starting point for the next minimisation. A common application of this feature is minimisation with non-linear constraints (see Non-linear Constraints). In this case, convergence problems can be dealt with by first doing a minimisation with a low constraint penalty factor, then increasing the factor and restarting the minimisation in the state where it terminated. For the default setting of false, the Hessian matrix is set to the identity matrix at the start of each minimisation.
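
Putting these methods together, a hedged sketch of a typical tuning sequence (all numerical settings are illustrative; the method for setting the constraint penalty factor itself is described in Non-linear Constraints):

// Configure the BFGS minimiser before fitting:
fitter.minimizer_verbosity(1);               // print some progress information
fitter.minimizer_precision(1e-4);            // tighter gradient-norm threshold
fitter.minimizer_line_search_precision(0.1); // the default is usually sufficient
fitter.minimizer_iterations(500);            // allow more than the default 200 iterations

// Keep the BFGS Hessian estimate between minimisations, e.g. to restart
// a constrained fit after raising the constraint penalty factor:
fitter.minimizer_keep_hessian(true);
fitter.local_fit(model);  // first fit with a low penalty factor
// ... raise the constraint penalty factor here (see Non-linear Constraints) ...
fitter.local_fit(model);  // restart in the state where the first fit ended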

