In some cases you may need to minimize the chi-square function under constraints of the form g_i(\xi) = c_i, where \xi are the parameters of your model, i is an index, the g_i are real-valued functions and the c_i are constants. One way to implement such constraints is to add penalty terms to the chi-square function which become large for g_i(\xi) \neq c_i. Of course, the chi-square function then loses its statistical interpretation, and one should remove the penalty terms from the chi-square value when computing p-values and confidence intervals.
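With Gaussian penalty terms, for example, the function being minimized takes the schematic form \chi^2(\xi) + \sum_i (g_i(\xi) - c_i)^2 / \sigma_i^2, where the widths \sigma_i are chosen small so that even mild violations of the constraints drive the objective function up steeply.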
myFitter supports this approach to non-linear constraints by keeping the penalty terms in the chi-square separate from the ordinary terms. To implement non-linear constraints g_i(\xi) = c_i you first have to implement the functions g_i as observables in your Model class. Then you add inputs for these observables to the input_function() member of your Fitter object with the method

int InputFunction::add_constraint(const InputComponent&)
You can use any of the InputComponent classes discussed above. Usually GaussianIC instances with central values c_i and small errors will do the job.
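As an illustration, the following sketch adds an equality constraint g(\xi) = c on an observable with index MyModel::O_G. It assumes a three-argument GaussianIC constructor of the form GaussianIC(index, error, central value), analogous to the four-argument call shown further below; c and dg are placeholder variables:

    // Constrain the observable MyModel::O_G to the value c.
    // A small error dg makes the penalty term steep near g(xi) = c.
    fitter.input_function().add_constraint(GaussianIC(MyModel::O_G, dg, c));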
Calls to Fitter::global_fit or Fitter::local_fit (see Minimising the chi-square Function) will then minimize the value of \chi^2 + f\chi^2_c, where \chi^2 is the sum of all contributions added via InputFunction::add, \chi^2_c is the sum of all contributions added via InputFunction::add_constraint, and f is the constraint penalty factor, which can be accessed with the methods

double InputFunction::constraint_penalty_factor()
void InputFunction::constraint_penalty_factor(double f)
Thus, by varying the constraint penalty factor you can change the weight of the constraint penalties in the overall fit. The factor must be tuned on a case-by-case basis: if it is too small, the fit will violate the constraints in order to decrease the value of \chi^2; if it is too large, your fits will not converge because the curvature of the objective function becomes too large. The default setting is 1.
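For example, to give the constraint penalties more weight in a particular fit you can use the accessors above (the value 100.0 is an arbitrary illustration):

    // Increase the weight of the constraint penalties, then read it back.
    fitter.input_function().constraint_penalty_factor(100.0);
    double f = fitter.input_function().constraint_penalty_factor();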
Instead of changing the constraint penalty factor you can also change the errors of the input components added via InputFunction::add_constraint. In this sense, the constraint penalty factor is just a convenient way of scaling the contributions of all constraint penalties simultaneously. Note, however, that the constraint penalty value written to Model::constraint_penalty by the functions Fitter::global_fit and Fitter::local_fit (see Minimising the chi-square Function) is \chi^2_c, i.e. it does not include the constraint penalty factor.
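This makes it possible to check after a fit whether the constraints are actually satisfied. A minimal sketch, assuming your model instance is called mymodel and that constraint_penalty can be read directly as a data member (how exactly it is accessed depends on your Model class):

    // ... after a call to Fitter::global_fit or Fitter::local_fit ...
    // chi2_c excludes the penalty factor f and should be close to zero
    // if the fit respects the constraints g_i(xi) = c_i.
    double chi2_c = mymodel.constraint_penalty;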
To implement constraints of the form c_1 < g(\xi) < c_2 you can pass a GaussianIC instance with a non-zero systematic error to InputFunction::add_constraint. If MyModel::O_G is the index associated with the observable g, c1 and c2 are the lower and upper bounds, and dg is the small error, you can do the following:
fitter.input_function().add_constraint(
    GaussianIC(MyModel::O_G, dg, (c1+c2)/2, (c2-c1)/2));
Alternatively, you can define an observable h(\xi) in such a way that it is only non-zero when g(\xi) < c_1 or g(\xi) > c_2, and add a Gaussian constraint on h with a central value of 0.
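A sketch of one possible choice for such an observable, assuming a hypothetical index MyModel::O_H for h, a small error dh, and the same three-argument GaussianIC form as above:

    // Inside the observable calculation of your Model class
    // (needs #include <algorithm> for std::max): h vanishes for
    // c1 < g < c2 and grows linearly outside the interval.
    double h = std::max(0.0, c1 - g) + std::max(0.0, g - c2);
    // ... store h as the observable with index MyModel::O_H, then:
    fitter.input_function().add_constraint(GaussianIC(MyModel::O_H, dh, 0.0));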