Modeling Data and Curve Fitting¶
A common use of least-squares minimization is curve fitting, where one has a parametrized model function meant to explain some phenomenon and wants to adjust the numerical values for the model so that it most closely matches some data. With scipy, such problems are typically solved with scipy.optimize.curve_fit, which is a wrapper around scipy.optimize.least_squares. While minimize() can be used for curve-fitting problems, it is more general and not aimed specifically at this common use case.
The Model class in lmfit provides a simple and flexible approach to curve-fitting problems. Like scipy.optimize.curve_fit, a Model uses a model function – a function that is meant to calculate a model for some phenomenon – and then uses that to best match an array of supplied data. Beyond that similarity, the Model interface is rather different from scipy.optimize.curve_fit. These differences include a) being class-based and so having multiple methods for working with Models, b) using Parameters, and all of the advantages they provide, and c) being easy to combine into composite models.
In addition to allowing you to turn any model function into a curve-fitting model, lmfit also provides canonical definitions for many known line shapes such as Gaussian or Lorentzian peaks and Exponential decays that are widely used in many scientific domains. These are available in the models module, which will be discussed in more detail in the next chapter (Built-in Fitting Models in the models module). We mention it here as you may want to consult that list before writing your own model. For now, we focus on turning Python functions into high-level fitting models with the Model class, and using these to fit data.
Motivation and simple example: Fit data to Gaussian profile¶
We start with a simple and common example of fitting data to a Gaussian peak. As we will see, there is a built-in GaussianModel class that can help do this, but here we'll build our own. We start with a simple definition of the model function:
from numpy import exp, linspace, random
def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)
With that function, we can easily calculate a model for our data, and the goal here is to use this function to fit to data \(y(x)\) represented by the arrays y and x. With scipy.optimize.curve_fit, this would be:
from scipy.optimize import curve_fit
x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)
init_vals = [1, 0, 1] # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)
That is, we create data (maybe adding a little noise), make an initial guess of the model values, and run scipy.optimize.curve_fit with the model function, data arrays, and initial guesses. The results returned are the optimal values for the parameters and the covariance matrix.
With lmfit, we create a Model that wraps the gaussian model function, which automatically generates the appropriate residual function and determines the corresponding parameter names from the function signature itself:
from lmfit import Model
gmodel = Model(gaussian)
print(f'parameter names: {gmodel.param_names}')
print(f'independent variables: {gmodel.independent_vars}')
parameter names: ['amp', 'cen', 'wid']
independent variables: ['x']
As you can see, the Model gmodel determined the names of the parameters and the independent variables. By default, the first argument of the function is taken as the independent variable, held in independent_vars, and the rest of the function's positional arguments (and, in certain cases, keyword arguments – see below) are used for Parameter names. Thus, for the gaussian function above, the independent variable is x, and the parameters are named amp, cen, and wid. These are all taken directly from the signature of the model function. As discussed below, you can modify the default assignment of independent variable / arguments and specify yourself what the independent variable is.
Although the model determines what the parameters should be named, Parameters are not created when the model is created. That is, the model knows the parameter names, but nothing about the scale and range of your data. You will have to make Parameters for a Model yourself. To help you create Parameters for a Model, each model has a make_params() method that will generate parameters with the expected names. You can use this method or create Parameters some other way (say, with create_params()), but you will have to create Parameters and assign initial values for all Parameters. You can also assign other attributes when doing this:
params = gmodel.make_params()
This creates the Parameters but does not automatically give them initial values, since it has no idea what the scale should be. If left unspecified, the initial values will be -Inf, which will generally fail to give useful results. You can set initial values for parameters with keyword arguments to make_params():
params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)
or assign them (and other parameter properties) after the Parameters object has been created.
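For example, a minimal sketch of setting values and properties after creation (reusing the gmodel created above):
params = gmodel.make_params()
params['amp'].value = 3.0                # set an initial value directly
params['cen'].set(value=0.3)             # equivalent, via Parameter.set()
params['wid'].set(value=1.25, min=0)     # initial value plus a lower bound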
A Model has several methods associated with it. For example, one can use the eval() method to evaluate the model or the fit() method to fit data to this model with a Parameters object. Both of these methods can take explicit keyword arguments for the parameter values. For example, one could use eval() to calculate the predicted function:
x_eval = linspace(0, 10, 201)
y_eval = gmodel.eval(params, x=x_eval)
or with:
y_eval = gmodel.eval(x=x_eval, cen=6.5, amp=100, wid=2.0)
Admittedly, this is a slightly long-winded way to calculate a Gaussian function, given that you could have called your gaussian function directly. But now that the model is set up, we can use its fit() method to fit this model to data, as with:
result = gmodel.fit(y, params, x=x)
or with:
result = gmodel.fit(y, x=x, cen=0.5, amp=10, wid=2.0)
Putting everything together, included in the examples folder with the source code, is the following script:
# <examples/doc_model_gaussian.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt
from lmfit import Model
data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]
def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))
gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)
print(result.fit_report())
plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_gaussian.py>
which is pretty compact and to the point. The returned result will be a ModelResult object. As we will see below, this has many components, including a fit_report() method, which will show:
[[Model]]
    Model(gaussian)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 33
    # data points      = 101
    # variables        = 3
    chi-square         = 3.40883599
    reduced chi-square = 0.03478404
    Akaike info crit   = -336.263713
    Bayesian info crit = -328.418352
    R-squared          = 0.98533348
[[Variables]]
    amp:  8.88021893 +/- 0.11359522 (1.28%) (init = 5)
    cen:  5.65866102 +/- 0.01030495 (0.18%) (init = 5)
    wid:  0.69765478 +/- 0.01030505 (1.48%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(amp, wid) = +0.5774
As the script shows, the result will also have init_fit for the fit with the initial parameter values and a best_fit for the fit with the best-fit parameter values. These can be used to generate the following plot:
which shows the data in blue dots, the best fit as a solid green line, and the initial fit as a dashed orange line.
Note that the model fitting was really performed with:
gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)
These lines clearly express that we want to turn the gaussian function into a fitting model, and then fit the \(y(x)\) data to this model, starting with values of 5 for amp, 5 for cen, and 1 for wid. In addition, all the other features of lmfit are included: Parameters can have bounds and constraints and the result is a rich object that can be reused to explore the model fit in detail.
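As a brief sketch of those extra features (parameter names follow the gaussian model above; the bounds shown are illustrative, not required):
params = gmodel.make_params(amp=dict(value=5, min=0),
                            cen=5,
                            wid=dict(value=1, min=0))
result = gmodel.fit(y, params, x=x)
print(result.params['wid'].stderr)   # uncertainties are available on the result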
The Model class¶
The Model class provides a general way to wrap a pre-defined function as a fitting model.
- class Model(func, independent_vars=None, param_names=None, nan_policy='raise', prefix='', name=None, **kws)¶
Create a model from a user-supplied model function.
The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. It will return an array of data to model some data as for a curve-fitting problem.
- Parameters:
func (callable) – Function to be wrapped.
independent_vars (list of str, optional) – Arguments to func that are independent variables (default is None).
param_names (list of str, optional) – Names of arguments to func that are to be made into parameters (default is None).
nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and missing values in data. See Notes below.
prefix (str, optional) – Prefix used for the model.
name (str, optional) – Name for the model. When None (default) the name is the same as the model function (func).
**kws (dict, optional) – Additional keyword arguments to pass to model function.
Notes
1. The model function must return an array that will be the same size as the data being modeled.
2. Parameter names are inferred from the function arguments by default, and a residual function is automatically constructed.
3. Specifying independent_vars here will explicitly name the independent variables for the Model. In contrast, param_names is meant to help infer Parameter names for keyword arguments defined with **kws in the model function.
4. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
'raise' : raise a ValueError (default)
'propagate' : do nothing
'omit' : drop missing data
Examples
The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. Thus, a simple peak using a Gaussian defined as:
>>> import numpy as np
>>> def gaussian(x, amp, cen, wid):
...     return amp * np.exp(-(x-cen)**2 / wid)
can be turned into a Model with:
>>> gmodel = Model(gaussian)
this will automatically discover the names of the independent variables and parameters:
>>> print(gmodel.param_names, gmodel.independent_vars)
['amp', 'cen', 'wid'] ['x']
Model class Methods¶
- Model.eval(params=None, **kwargs)¶
Evaluate the model with supplied parameters and keyword arguments.
- Parameters:
params (Parameters, optional) – Parameters to use in Model.
**kwargs (optional) – Additional keyword arguments to pass to model function.
- Returns:
Value of model given the parameters and other arguments.
- Return type:
numpy.ndarray, float, int or complex
Notes
1. If params is None, the values for all parameters are expected to be provided as keyword arguments.
2. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will be used in place of the value in params.
3. All non-parameter arguments for the model function, including all the independent variables, will need to be passed in using keyword arguments.
4. The return type is generally numpy.ndarray, but may depend on the model function and input independent variables. That is, return values may be Python float, int, or complex values.
- Model.fit(data, params=None, weights=None, method='leastsq', iter_cb=None, scale_covar=True, verbose=False, fit_kws=None, nan_policy=None, calc_covar=True, max_nfev=None, coerce_farray=True, **kwargs)¶
Fit the model to the data using the supplied Parameters.
- Parameters:
data (array_like) – Array of data to be fit.
params (Parameters, optional) – Parameters to use in fit (default is None).
weights (array_like, optional) – Weights to use for the calculation of the fit residual [i.e., weights*(data-fit)]. Default is None; must have the same size as data.
method (str, optional) – Name of fitting method to use (default is ‘leastsq’).
iter_cb (callable, optional) – Callback function to call at each iteration (default is None).
scale_covar (bool, optional) – Whether to automatically scale the covariance matrix when calculating uncertainties (default is True).
verbose (bool, optional) – Whether to print a message when a new parameter is added because of a hint (default is False).
fit_kws (dict, optional) – Options to pass to the minimizer being used.
nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
coerce_farray (bool, optional) – Whether to coerce data and independent data to be ndarrays with dtype of float64 (or complex128). If set to False, data and independent data are not coerced at all, but the output of the model function will be. (default is True)
**kwargs (optional) – Arguments to pass to the model function, possibly overriding parameters.
- Return type:
ModelResult
Notes
1. If params is None, the values for all parameters are expected to be provided as keyword arguments. Mixing params and keyword arguments is deprecated (see Model.eval).
2. All non-parameter arguments for the model function, including all the independent variables, will need to be passed in using keyword arguments.
3. Parameters are copied on input, so that the original Parameter objects are unchanged, and the updated values are in the returned ModelResult.
Examples
Take t to be the independent variable and data to be the curve we will fit. Use keyword arguments to set initial guesses:
>>> result = my_model.fit(data, tau=5, N=3, t=t)
Or, for more control, pass a Parameters object.
>>> result = my_model.fit(data, params, t=t)
- Model.guess(data, x, **kws)¶
Guess starting values for the parameters of a Model.
This is not implemented for all models, but is available for many of the built-in models.
- Parameters:
data (array_like) – Array of data (i.e., y-values) to use to guess parameter values.
x (array_like) – Array of values for the independent variable (i.e., x-values).
**kws (optional) – Additional keyword arguments, passed to model function.
- Returns:
Initial, guessed values for the parameters of a Model.
- Return type:
Parameters
- Raises:
NotImplementedError – If the guess method is not implemented for a Model.
Notes
Should be implemented for each model subclass to run self.make_params(), update starting values and return a Parameters object.
Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.
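For built-in models that implement it, a short sketch of using guess() to bootstrap a fit (GaussianModel is used for illustration; x and y are assumed data arrays):
from lmfit.models import GaussianModel

gmod = GaussianModel()
params = gmod.guess(y, x=x)        # estimate starting values from the data
result = gmod.fit(y, params, x=x)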
- Model.make_params(verbose=False, **kwargs)¶
Create a Parameters object for a Model.
- Parameters:
verbose (bool, optional) – Whether to print out messages (default is False).
**kwargs (optional) – Parameter names and initial values or dictionaries of values and attributes.
- Returns:
params – Parameters object for the Model.
- Return type:
Parameters
Notes
Parameter values can be numbers (floats or ints) to set the parameter value, or dictionaries with any of the following keywords: value, vary, min, max, expr, brute_step, is_init_value to set those parameter attributes.
This method will also apply any default values or parameter hints that may have been defined for the model.
Example
>>> gmodel = GaussianModel(prefix='peak_') + LinearModel(prefix='bkg_')
>>> gmodel.make_params(peak_center=3200, bkg_offset=0, bkg_slope=0,
...                    peak_amplitude=dict(value=100, min=2),
...                    peak_sigma=dict(value=25, min=0, max=1000))
- Model.set_param_hint(name, **kwargs)¶
Set hints to use when creating parameters with make_params().
The given hint can include optional bounds and constraints (value, vary, min, max, expr), which will be used by Model.make_params() when building default parameters.
While this can be used to set initial values, Model.make_params or the function create_params should be preferred for creating parameters with initial values.
The intended use here is to control how a Model should create parameters, such as setting bounds that are required by the mathematics of the model (for example, that a peak width cannot be negative), or to define common constrained parameters.
- Parameters:
name (str) – Parameter name; can include the model's prefix or not.
**kwargs (optional) –
Arbitrary keyword arguments, needs to be a Parameter attribute. Can be any of the following:
value (float, optional) – Numerical Parameter value.
vary (bool, optional) – Whether the Parameter is varied during a fit (default is True).
min (float, optional) – Lower bound for value (default is -numpy.inf, no lower bound).
max (float, optional) – Upper bound for value (default is numpy.inf, no upper bound).
expr (str, optional) – Mathematical expression used to constrain the value during the fit.
Example
>>> model = GaussianModel()
>>> model.set_param_hint('sigma', min=0)
- Model.print_param_hints(colwidth=8)¶
Print a nicely aligned text-table of parameter hints.
- Parameters:
colwidth (int, optional) – Width of each column, except for first and last columns.
- Model.post_fit(fitresult)¶
Function that is called just after a fit; it can be overloaded by subclasses to add non-fitting 'calculated parameters'.
See Using uncertainties in the fitted parameters for post-fit calculations.
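A minimal sketch of the idea, assuming a subclass that adds a derived, non-varying parameter after each fit (the 'area' expression is illustrative, not part of lmfit):
class MyModel(Model):
    def post_fit(self, fitresult):
        # add a 'calculated parameter' constrained by an expression
        fitresult.params.add('area', expr='amp*wid')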
Model class Attributes¶
- func¶
The model function used to calculate the model.
- independent_vars¶
List of strings for names of the independent variables.
- nan_policy¶
Describes what to do for NaNs that indicate missing values in the data. The choices are:
'raise' : Raise a ValueError (default).
'propagate' : Do not check for NaNs or missing values. The fit will try to ignore them.
'omit' : Remove NaNs or missing observations in data. If pandas is installed, pandas.isnull() is used, otherwise numpy.isnan() is used.
- name¶
Name of the model, used only in the string representation of the model. By default this will be taken from the model function.
- opts¶
Extra keyword arguments to pass to model function. Normally this will be determined internally and should not be changed.
- param_hints¶
Dictionary of parameter hints. See Using parameter hints.
- param_names¶
List of strings of parameter names.
- prefix¶
Prefix used for name-mangling of parameter names. The default is ''. If a particular Model has arguments amplitude, center, and sigma, these would become the parameter names. Using a prefix of 'g1_' would convert these parameter names to g1_amplitude, g1_center, and g1_sigma. This can be essential to avoid name collision in composite models.
Determining parameter names and independent variables for a function¶
The Model created from the supplied function func will guess which function arguments of the model function should be made into Parameters, and which should be considered non-varying independent variables that need to be specified when evaluating or fitting with the Model.
By convention and by default, the first argument to the model function is taken to be the independent variable – this is often the x value for the model. You can explicitly set this to be a different function argument, and will need to do so if the independent variable is not first in the list, or if there is more than one independent variable. Since curve fitting most commonly fits some function \(y(x)\), it is common for the independent variable to be a numpy float ndarray of the same length as the data to be fit. While this is common, it is not actually required. The independent variable does not need to be an ndarray, and can be any Python data type, and it is fine to have multiple independent variables.
Changed in version 1.2.3.
By convention and default, the positional arguments (that is, those without default values specified in the function signature) other than the first argument are marked as being Parameters – these will have an invalid default value (of -Inf) that must be supplied before using the Parameter. Keyword arguments to the model function that have numerical values will also be marked as being Parameters, and the supplied numerical value will be used as the default value for that Parameter.
Keyword arguments to the model function that have non-numerical values (including None, False, and True, but also strings) will be marked as being independent variables and their default value will be stored and used unless overwritten. There is some ambiguity for function arguments that have a value of None, False, or True supplied in the function signature. These function arguments are tagged as independent variables, but can be made into a Parameter with Model.make_params(), in which case they will be treated as variables. An example is given below.
Explicitly specifying independent_vars¶
As we saw for the Gaussian example above, creating a Model from a function is fairly easy. Let's try another one:
import numpy as np
from lmfit import Model
def decay(t, tau, N):
    return N*np.exp(-t/tau)
decay_model = Model(decay)
print(f'independent variables: {decay_model.independent_vars}')
params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)
independent variables: ['t']
Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>
Here, t is assumed to be the independent variable because it is the first argument to the function. The other function arguments are used to create parameters for the model.
If you want tau to be the independent variable in the above example, you can say so:
decay_model = Model(decay, independent_vars=['tau'])
print(f'independent variables: {decay_model.independent_vars}')
params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)
independent variables: ['tau']
Parameters:
t <Parameter 't', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>
You can also supply multiple values for multi-dimensional functions with multiple independent variables. In fact, the meaning of independent variable here is simple, and based on how it treats arguments of the function you are modeling:
- independent variable
A function argument that is not a parameter or otherwise part of the model, and that will be required to be explicitly provided as a keyword argument for each fit with Model.fit() or evaluation with Model.eval().
Note that independent variables are not required to be arrays, or even floating point numbers. Dictionaries or objects from user-defined classes can be independent variables.
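As an illustration of a non-array independent variable (this decay_opts function and its opts dictionary are hypothetical, not part of lmfit):
import numpy as np
from lmfit import Model

def decay_opts(t, tau, N, opts=None):
    # `opts` is a plain dict used as an independent variable
    scale = 1.0 if opts is None else opts.get('scale', 1.0)
    return scale * N * np.exp(-t/tau)

mod = Model(decay_opts, independent_vars=['t', 'opts'])
params = mod.make_params(tau=2, N=10)
t = np.linspace(0, 5, 51)
y = mod.eval(params, t=t, opts={'scale': 0.5})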
Functions with keyword arguments¶
If the model function had keyword parameters, these would be turned into Parameters if the supplied default value was a valid number:
def decay2(t, tau, N=10, check_positive=False):
    if check_positive:
        arg = abs(t)/max(1.e-9, abs(tau))
    else:
        arg = t/tau
    return N*np.exp(-arg)
mod = Model(decay2)
print(f'independent variables: {mod.independent_vars}')
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)
independent variables: ['t', 'check_positive']
Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=10, bounds=[-inf:inf]>
Here, check_positive is listed as an independent variable, because it has a default value of False.
The function argument N is expected to be a Parameter, because it has a numerical default value. This value will be kept as the default value when building the Parameters. Also, note that the default value for tau is -np.inf, which will need to be fixed before it can really be used.
Function arguments with default values of None, False, and True are slightly ambiguous as to whether they should be considered independent variables or Parameters. That is, in many contexts (including basic arithmetic) True acts like the integer 1, and False acts like 0. A default value of None might be used to signal some default behavior. For example, the builtin lmfit.lineshapes.voigt() function is defined as
def voigt(x, amplitude=1.0, center=0.0, sigma=1.0, gamma=None):
    """Return a 1-dimensional Voigt function.
    ...
    """
    if gamma is None:
        gamma = sigma
    z = (x-center + 1j*gamma) / max(tiny, (sigma*s2))
    return amplitude*real(wofz(z)) / max(tiny, (sigma*s2pi))
mod = Model(voigt)
print(f'independent variables: {mod.independent_vars}')
params = mod.make_params(amplitude=10, center=5, sigma=2)
# or
params2 = mod.make_params(amplitude=10, center=5, sigma=2, gamma=1.5)
independent variables: ['x', 'gamma']
In this case, gamma will be listed as an independent variable. But because its default value is None (or True or False) it can be made a Parameter that can be varied independently, as shown above. Of course, whether this makes sense depends on the details of the implementation of the model function: it can be sensible (and even intended) for gamma in the voigt function above to be a numerical value that could be a variable Parameter, but it is not sensible for the check_positive argument in decay2 to be a variable that can take any numeric value.
Defining a prefix for the Parameters¶
As we will see below when combining models, it is sometimes necessary to decorate the parameter names in the model, but still have them be correctly used in the underlying model function. This would be necessary, for example, if two parameters in a composite model (see Composite Models : adding (or multiplying) Models or examples in the next chapter) would have the same name. To avoid this, we can add a prefix to the Model which will automatically do this mapping for us.
def myfunc(x, amplitude=1, center=0, sigma=1):
    # function definition, for now just ``pass``
    pass
mod = Model(myfunc, prefix='f1_')
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)
Parameters:
f1_amplitude <Parameter 'f1_amplitude', value=1, bounds=[-inf:inf]>
f1_center <Parameter 'f1_center', value=0, bounds=[-inf:inf]>
f1_sigma <Parameter 'f1_sigma', value=1, bounds=[-inf:inf]>
You would refer to these parameters as f1_amplitude and so forth, and the model will know to map these to the amplitude argument of myfunc.
Initializing model parameter values¶
As mentioned above, creating a model does not automatically create the corresponding Parameters. These can be created with either the create_params() function, or the Model.make_params() method of the corresponding instance of Model.
When creating Parameters, each parameter is created with an invalid initial value of -Inf if it is not set explicitly. That is to say, parameter values must be initialized in order for the model to evaluate a finite result or be used in a fit. There are a few different ways to do this:
- You can supply initial values in the definition of the model function.
- You can initialize the parameters when creating parameters with Model.make_params().
- You can create a Parameters object with Parameters or create_params().
- You can supply initial values for the parameters when calling the Model.eval() or Model.fit() methods.
Generally, using the Model.make_params() method is recommended. The methods described above can be mixed, allowing you to overwrite initial values at any point in the process of defining and using the model.
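For example, one way to mix these approaches with the Gaussian model from the start of this chapter (values are illustrative):
params = gmodel.make_params(amp=5, cen=5, wid=1)   # initial values at creation
params['cen'].value = 4.5                          # overwrite one value before fitting
result = gmodel.fit(y, params, x=x)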
Initializing values in the function definition¶
To supply initial values for parameters in the definition of the model function, you can simply supply a default value:
def myfunc(x, a=1, b=0):
    return a*x + 10*a - b
instead of using:
def myfunc(x, a, b):
    return a*x + 10*a - b
This has the advantage of working at the function level – all parameters with keywords can be treated as options. It also means that some default initial value will always be available for the parameter.
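For instance, a quick sketch with the myfunc defaults above:
mod = Model(myfunc)
pars = mod.make_params()       # initial values picked up from a=1, b=0
print(pars['a'].value, pars['b'].value)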
Initializing values with Model.make_params()¶
When creating parameters with Model.make_params() you can specify initial values. To do this, use keyword arguments for the parameter names. You can either set initial values as numbers (floats or ints) or as dictionaries with keywords of (value, vary, min, max, expr, brute_step, and is_init_value) to specify these parameter attributes.
mod = Model(myfunc)
# simply supply initial values
pars = mod.make_params(a=3, b=0.5)
# supply initial values, attributes for bounds, etcetera:
pars_bounded = mod.make_params(a=dict(value=3, min=0),
                               b=dict(value=0.5, vary=False))
Creating a Parameters object directly¶
You can also create your own Parameters directly using create_params(). This is independent of using the Model class, but is essentially equivalent to Model.make_params() except with less checking of errors for model prefixes and so on.
from lmfit import create_params
mod = Model(myfunc)
# simply supply initial values
pars = create_params(a=3, b=0.5)
# supply initial values and attributes for bounds, etc:
pars_bounded = create_params(a=dict(value=3, min=0),
                             b=dict(value=0.5, vary=False))
Because less error checking is done, Model.make_params() should probably be preferred when using Models.
Initializing parameter values for a model with keyword arguments¶
Finally, you can explicitly supply initial values when using a model. That is, as with Model.make_params(), you can include values as keyword arguments to either the Model.eval() or Model.fit() methods:
x = linspace(0, 10, 100)
y_eval = mod.eval(x=x, a=7.0, b=-2.0)
y_sim = y_eval + random.normal(0, 0.2, x.size)
out = mod.fit(y_sim, pars, x=x, a=3.0, b=0.0)
These approaches to initialization provide many opportunities for setting initial values for parameters. The methods can be combined, so that you can set parameter hints but then change the initial value explicitly with Model.fit().
Using parameter hints¶
After a model has been created, but prior to creating parameters with Model.make_params(), you can define parameter hints for that model. This allows you to set other parameter attributes for bounds, whether it is varied in the fit, or set a default constraint expression for a parameter. You can also set the initial value, but that is not really the intention of the method, which is really to let you state things about the idealized Model: for example, that some values may not make sense for some parameters, or that some parameters might be a small change from another parameter and so should be fixed or constrained by default.
To set a parameter hint, you can use Model.set_param_hint(), as with:
mod = Model(myfunc)
mod.set_param_hint('bounded_parameter', min=0, max=1.0)
pars = mod.make_params()
Parameter hints are discussed in more detail in section Using parameter hints.
Parameter hints are stored in a model's param_hints attribute, which is simply a nested dictionary:
print('Parameter hints:')
for pname, par in mod.param_hints.items():
    print(pname, par)
Parameter hints:
bounded_parameter {'min': 0, 'max': 1.0}
You can change this dictionary directly or use the Model.set_param_hint() method. Either way, these parameter hints are used by Model.make_params() when making parameters.
Parameter hints also allow you to create new parameters. This can be useful to make derived parameters with constraint expressions. For example, to get the full-width at half maximum of a Gaussian model, one could use a parameter hint of:
mod = Model(gaussian)
mod.set_param_hint('wid', min=0)
mod.set_param_hint('fwhm', expr='2.3548*wid')
params = mod.make_params(amp={'value': 10, 'min': 0.1, 'max': 2000},
                         cen=5.5, wid=1.25)
params.pretty_print()
Name     Value      Min      Max   Stderr     Vary       Expr Brute_Step
amp         10      0.1     2000     None     True       None       None
cen        5.5     -inf      inf     None     True       None       None
fwhm     2.944     -inf      inf     None    False 2.3548*wid       None
wid       1.25        0      inf     None     True       None       None
With that definition, the value (and uncertainty) of the fwhm parameter will be reported in the output of any fit done with that model.
Data Types for data and independent data with Model¶
The model as defined by your model function will use the independent variable(s) you specify to best match the data you provide. The model is meant to be an abstract representation for data, but when you do a fit with Model.fit(), you really need to pass in values for the data to be modeled and the independent data used to calculate that data.
As discussed in Types of Data to Use for Fitting, the mathematical solvers used by lmfit all work exclusively with 1-dimensional numpy arrays of datatype (dtype) "float64". The value of the calculation (model-data)*weights, computed from the output of your model function and the data and weights you pass in, will always be coerced to a 1-dimensional ndarray with dtype "float64" when it is passed to the solver. If it cannot be coerced, an error will occur and the fit will be aborted.
That coercion will usually work for "array like" data that is not already a float64 ndarray. But, depending on the model function, the calculations within the model function may not always work well for some "array like" data types – especially independent data held in lists of numbers or in ndarrays of type "float32" or "int16" or lower precision.
To be clear, independent data for models using Model are meant to be truly independent, and not required to be strictly numerical or objects that are easily converted to arrays of numbers. They could, for example, be a dictionary, an instance of a user-defined class, or other type of structured data. You can use independent data any way you want in your model function. But, as with almost all the examples given here, independent data is often also a 1-dimensional array of values, say x, and a simple view of the fit would be to plot the data as y as a function of x. Again, this is not required, but it is very common, especially for novice users.
By default, all data and independent data passed to Model.fit() that is "array like" – a list or tuple of numbers, a pandas.Series, an h5py.Dataset, or any object that has an __array__() method – will be converted to a "float64" ndarray before the fit begins. If the array-like data is complex, it will be converted to a "complex128" ndarray, which will always work too. This conversion before the fit begins ensures that the model function sees only "float64" ndarrays, and nearly guarantees that data type conversion will not cause problems for the fit. But it also means that if you have passed a pandas.Series as data or independent data, not all of the methods or attributes of that Series will be available by default within the model function.
Added in version 1.2.2.
This coercion can be turned off with the coerce_farray option to Model.fit(). When set to False, neither the data nor the independent data will be coerced from their original data type, and the user will be responsible for arranging for the calculation and return value from the model function to allow a proper and accurate conversion to a "float64" ndarray.
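A sketch of when that might be useful, assuming a model function written to work directly with pandas.Series objects ('data.csv' and its column names are hypothetical):
import pandas as pd

frame = pd.read_csv('data.csv')
result = mod.fit(frame['y'], params, x=frame['x'], coerce_farray=False)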
See also Types of Data to Use for Fitting for general advice and recommendations on types of data to use when fitting data.
Saving and Loading Models¶
It is sometimes desirable to save a Model for later use outside of the code used to define the model. Lmfit provides a save_model() function that will save a Model to a file. There is also a companion load_model() function that can read this file and reconstruct a Model from it.
Saving a model turns out to be somewhat challenging. The main issue is that Python is not normally able to serialize a function (such as the model function making up the heart of the Model) in a way that can be reconstructed into a callable Python object. The dill package can sometimes serialize functions, but with the limitation that it can be used only in the same version of Python. In addition, class methods used as model functions will not retain the rest of the class attributes and methods, and so may not be usable. With all those warnings, it should be emphasized that if you are willing to save or reuse the definition of the model function as Python code, then saving the Parameters and rest of the components that make up a model presents no problem.
If the dill package is installed, the model function will also be saved using it. But because saving the model function is not always reliable, saving a model will always save the name of the model function. The load_model() function takes an optional funcdefs argument that can contain a dictionary of function definitions with the function names as keys and function objects as values. If one of the dictionary keys matches the saved name, the corresponding function object will be used as the model function. If it is not found by name, and if dill was used to save the model, and if dill is available at run-time, the dill-encoded function will be tried. Note that this approach will generally allow you to save a model that can be used by another installation of the same version of Python, but may not work across Python versions. For preserving fits for extended periods of time (say, archiving for documentation of scientific results), we strongly encourage you to save the full Python code used for the model function and fit process.
- save_model(model, fname)¶
Save a Model to a file.
- load_model(fname, funcdefs=None)¶
Load a saved Model from a file.
As a simple example, one can save a model as:
# <examples/doc_model_savemodel.py>
import numpy as np
from lmfit.model import Model, save_model
def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)
sinemodel = Model(mysine)
pars = sinemodel.make_params(amp=1, freq=0.25, shift=0)
save_model(sinemodel, 'sinemodel.sav')
# <end examples/doc_model_savemodel.py>
To load that later, one might do:
# <examples/doc_model_loadmodel.py>
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
from lmfit.model import load_model
if not os.path.exists('sinemodel.sav'):
    os.system(f"{sys.executable} doc_model_savemodel.py")

def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)
data = np.loadtxt('sinedata.dat')
x = data[:, 0]
y = data[:, 1]
model = load_model('sinemodel.sav', funcdefs={'mysine': mysine})
params = model.make_params(amp=dict(value=3, min=0),
                           freq=0.52,
                           shift=dict(value=0, min=-1, max=1))
result = model.fit(y, params, x=x)
print(result.fit_report())
plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodel.py>
See also Saving and Loading ModelResults.
The ModelResult class¶
A ModelResult (which had been called ModelFit prior to version 0.9) is the object returned by Model.fit(). It is a subclass of Minimizer, and so contains many of the fit results. Of course, it knows the Model and the set of Parameters used in the fit, and it has methods to evaluate the model, to fit the data (or re-fit the data with changes to the parameters, or fit with different or modified data) and to print out a report for that fit.
While a Model encapsulates your model function, it is fairly abstract and does not contain the parameters or data used in a particular fit. A ModelResult does contain parameters and data as well as methods to alter and re-do fits. Thus the Model is the idealized model while the ModelResult is the messier, more complex (but perhaps more useful) object that represents a fit with a set of parameters to data with a model.
A ModelResult has several attributes holding values for fit results, and several methods for working with fits. These include statistics inherited from Minimizer useful for comparing different models, including chisqr, redchi, aic, and bic.
- class ModelResult(model, params, data=None, weights=None, method='leastsq', fcn_args=None, fcn_kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', calc_covar=True, max_nfev=None, **fit_kws)¶
Result from the Model fit.
This has many attributes and methods for viewing and working with the results of a fit using Model. It inherits from Minimizer, so that it can be used to modify and re-run the fit for the Model.
- Parameters:
model (Model) – Model to use.
params (Parameters) – Parameters with initial values for model.
data (array_like, optional) – Data to be modeled.
weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
method (str, optional) – Name of minimization method to use (default is 'leastsq').
fcn_args (sequence, optional) – Positional arguments to send to model function.
fcn_kws (dict, optional) – Keyword arguments to send to model function.
iter_cb (callable, optional) – Function to call on each iteration of fit.
scale_covar (bool, optional) – Whether to scale covariance matrix for uncertainty evaluation.
nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
**fit_kws (optional) – Keyword arguments to send to minimization routine.
ModelResult methods¶
- ModelResult.eval(params=None, **kwargs)¶
Evaluate model function.
- Parameters:
params (Parameters, optional) – Parameters to use.
**kwargs (optional) – Options to send to Model.eval().
- Returns:
Array or value for the evaluated model.
- Return type:
numpy.ndarray, float, int, or complex
- ModelResult.eval_components(params=None, **kwargs)¶
Evaluate each component of a composite model function.
- Parameters:
params (Parameters, optional) – Parameters, defaults to ModelResult.params.
**kwargs (optional) – Keyword arguments to pass to model function.
- Returns:
Keys are prefixes of component models, and values are the estimated model value for each component of the model.
- Return type:
dict
- ModelResult.fit(data=None, params=None, weights=None, method=None, nan_policy=None, **kwargs)¶
Re-perform fit for a Model, given data and params.
- Parameters:
data (array_like, optional) – Data to be modeled.
params (Parameters, optional) – Parameters with initial values for model.
weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
method (str, optional) – Name of minimization method to use (default is 'leastsq').
nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
**kwargs (optional) – Keyword arguments to send to minimization routine.
- ModelResult.fit_report(modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False, correl_mode='list')¶
Return a printable fit report.
The report contains fit statistics and best-fit values with uncertainties and correlations.
- Parameters:
modelpars (Parameters, optional) – Known Model Parameters.
show_correl (bool, optional) – Whether to show list of sorted correlations (default is True).
min_correl (float, optional) – Smallest correlation in absolute value to show (default is 0.1).
sort_pars (bool or callable, optional) – Whether to show parameter names sorted in alphanumerical order (default is False). If False, then the parameters will be listed in the order as they were added to the Parameters dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list element.
correl_mode ({'list', 'table'} str, optional) – Mode for how to show correlations. Can be either 'list' (default) to show a sorted (if sort_pars is True) list of correlation values, or 'table' to show a complete, formatted table of correlations.
- Returns:
Multi-line text of fit report.
- Return type:
str
- ModelResult.summary()¶
Return a dictionary with statistics and attributes of a ModelResult.
- Returns:
Dictionary of statistics and many attributes from a ModelResult.
- Return type:
dict
Notes
1. Values for data arrays are not included.
2. The result summary dictionary will include the following entries: model, method, ndata, nvarys, nfree, chisqr, redchi, aic, bic, rsquared, nfev, max_nfev, aborted, errorbars, success, message, lmdif_message, ier, nan_policy, scale_covar, calc_covar, ci_out, col_deriv, flatchain, call_kws, var_names, user_options, kws, init_values, best_values, and params, where 'params' is a list of parameter "states": tuples with entries of (name, value, vary, expr, min, max, brute_step, stderr, correl, init_value, user_data).
3. The result will include only plain Python objects, and so should be easily serializable with JSON or similar tools.
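Because the summary contains only plain Python objects, a sketch like the following should serialize it (assuming a successful fit result as above):
import json

summary = result.summary()
with open('fit_summary.json', 'w') as fout:
    json.dump(summary, fout, indent=2)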
- ModelResult.conf_interval(**kwargs)¶
Calculate the confidence intervals for the variable parameters.
Confidence intervals are calculated using the confidence.conf_interval() function and keyword arguments (**kwargs) are passed to that function. The result is stored in the ci_out attribute so that it can be accessed without recalculating them.
- ModelResult.ci_report(with_offset=True, ndigits=5, **kwargs)¶
Return a formatted text report of the confidence intervals.
- Parameters:
with_offset (bool, optional) – Whether to subtract best value from all other values (default is True).
ndigits (int, optional) – Number of significant digits to show (default is 5).
**kwargs (optional) – Keyword arguments passed to conf_interval().
- Returns:
Text of formatted report on confidence intervals.
- Return type:
str
- ModelResult.eval_uncertainty(params=None, sigma=1, dscale=0.01, **kwargs)¶
Evaluate the uncertainty of the model function.
This can be used to give confidence bands for the model from the uncertainties in the best-fit parameters.
- Parameters:
params (Parameters, optional) – Parameters, defaults to ModelResult.params.
sigma (float, optional) – Confidence level, i.e. how many sigma (default is 1).
dscale (float, optional) – Scale for derivative steps (default is 0.01).
**kwargs (optional) – Values of options, independent variables, etcetera.
- Returns:
Uncertainty at each value of the model.
- Return type:
numpy.ndarray
Notes
1. This is based on the excellent and clear example from https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals, which references the original work of: J. Wolberg, Data Analysis Using the Method of Least Squares, 2006, Springer.
2. The value of sigma is the number of sigma values, and is converted to a probability. Values of 1, 2, or 3 give probabilities of 0.6827, 0.9545, and 0.9973, respectively. If the sigma value is < 1, it is interpreted as the probability itself. That is, sigma=1 and sigma=0.6827 will give the same results, within precision errors.
3. The derivatives are calculated by stepping each Parameter from its best value to +/- stderr*dscale, where dscale can be passed in and defaults to 0.01.
4. Sets attributes of dely for the uncertainty of the model (which will be the same as the array returned by this method) and dely_comps, a dictionary of dely for each component.
5. Sets the attribute of dely_predicted for the 'predicted interval', the sigma-scaled quadrature sum of the uncertainty interval dely and reduced chi-square. This should give an idea of the expected range in the data.
Examples
>>> out = model.fit(data, params, x=x)
>>> dely = out.eval_uncertainty(x=x)
>>> plt.plot(x, data)
>>> plt.plot(x, out.best_fit)
>>> plt.fill_between(x, out.best_fit-dely,
...                  out.best_fit+dely, color='#888888')
- ModelResult.plot(datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, fig=None, data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None, fig_kws=None, show_init=False, parse_complex='abs', title=None)¶
Plot the fit results and residuals using matplotlib.
The method will produce a matplotlib figure (if package available) with both results of the fit and the residuals plotted. If the fit model included weights, errorbars will also be plotted. To show the initial conditions for the fit, pass the argument show_init=True.
- Parameters:
datafmt (str, optional) – Matplotlib format string for data points.
fitfmt (str, optional) – Matplotlib format string for fitted curve.
initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.
fig (matplotlib.figure.Figure, optional) – The figure to plot on. The default is None, which means use the current pyplot figure or create one if there is none.
data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.
ax_res_kws (dict, optional) – Keyword arguments for the axes for the residuals plot.
ax_fit_kws (dict, optional) – Keyword arguments for the axes for the fit plot.
fig_kws (dict, optional) – Keyword arguments for a new figure, if a new one is created.
show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).
parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.
title (str, optional) – Matplotlib format string for figure title.
- Return type:
matplotlib.figure.Figure
See also
ModelResult.plot_fit
Plot the fit results using matplotlib.
ModelResult.plot_residuals
Plot the fit residuals using matplotlib.
Notes
The method combines ModelResult.plot_fit and ModelResult.plot_residuals.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights. If the model returns complex data, yerr is treated the same way that weights are in this case.
If fig is None then matplotlib.pyplot.figure(**fig_kws) is called, otherwise fig_kws is ignored.
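Typical usage is a one-liner on the fit result (a sketch; the figure shows the residuals above the data and fitted curves):
import matplotlib.pyplot as plt

result.plot(show_init=True)
plt.show()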
- ModelResult.plot_fit(ax=None, datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, data_kws=None, fit_kws=None, init_kws=None, ax_kws=None, show_init=False, parse_complex='abs', title=None)¶
Plot the fit results using matplotlib, if available.
The plot will include the data points, the initial fit curve (optional, with show_init=True), and the best-fit curve. If the fit model included weights or if yerr is specified, errorbars will also be plotted.
- Parameters:
ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
datafmt (str, optional) – Matplotlib format string for data points.
fitfmt (str, optional) – Matplotlib format string for fitted curve.
initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.
data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.
ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).
parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.
title (str, optional) – Matplotlib format string for figure title.
- Return type:
matplotlib.axes.Axes
See also
ModelResult.plot_residuals
Plot the fit residuals using matplotlib.
ModelResult.plot
Plot the fit results and residuals using matplotlib.
Notes
For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights. If the model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.
- ModelResult.plot_residuals(ax=None, datafmt='o', yerr=None, data_kws=None, fit_kws=None, ax_kws=None, parse_complex='abs', title=None)¶
Plot the fit residuals using matplotlib, if available.
If yerr is supplied or if the model included weights, errorbars will also be plotted.
- Parameters:
ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
datafmt (str, optional) – Matplotlib format string for data points.
yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which correspond to the NumPy functions with the same name.
title (str, optional) – Matplotlib format string for figure title.
- Return type:
matplotlib.axes.Axes
See also
ModelResult.plot_fit
Plot the fit results using matplotlib.
ModelResult.plot
Plot the fit results and residuals using matplotlib.
Notes
For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights. If the model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.
- ModelResult.iter_cb()¶
Optional callable function, to be called at each fit iteration. This must take arguments of (params, iter, resid, *args, **kws), where params will have the current parameter values, iter the iteration number, resid the current residual array, and *args and **kws as passed to the objective function. See Using an Iteration Callback Function.
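A minimal sketch of such a callback (the printed parameter name is illustrative and must match your model):
def per_iteration(params, iter, resid, *args, **kws):
    # report progress every 10 iterations
    if iter % 10 == 0:
        print(f"iter {iter}: amp={params['amp'].value:.4f}")

result = gmodel.fit(y, params, x=x, iter_cb=per_iteration)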
- ModelResult.jacfcn()¶
Optional callable function, to be called to calculate Jacobian array.
ModelResult attributes¶
A ModelResult will take all of the attributes of MinimizerResult, and several more. Here, we arrange them into categories.
Parameters and Variables¶
- best_values¶
Dictionary with parameter names as keys, and best-fit values as values.
- init_params¶
Initial parameters, as passed to Model.fit().
- init_values¶
Dictionary with parameter names as keys, and initial values as values.
- init_vals¶
List of values for the variable parameters.
- params¶
Parameters used in fit; will contain the best-fit values and uncertainties if available.
- uvars¶
Dictionary of uncertainties ufloats from Parameters.
Fit Arrays and Model¶
- best_fit¶
numpy.ndarray result of model function, evaluated at provided independent variables and with best-fit parameters.
- covar¶
numpy.ndarray (square) covariance matrix returned from fit.
- data¶
numpy.ndarray of data to compare to model.
- dely¶
numpy.ndarray of estimated uncertainties or confidence interval in the y values of the model from ModelResult.eval_uncertainty() (see Calculating uncertainties in the model function).
- dely_comps¶
A dictionary of estimated uncertainties in the y values of the model components, from ModelResult.eval_uncertainty() (see Calculating uncertainties in the model function).
- dely_predicted¶
numpy.ndarray of estimated expected uncertainties in the data or predicted interval in the y values of the model from ModelResult.eval_uncertainty() (see Calculating uncertainties in the model function).
- init_fit¶
numpy.ndarray result of model function, evaluated at provided independent variables and with initial parameters.
- residual¶
numpy.ndarray for residual.
- weights¶
numpy.ndarray (or None) of weighting values to be used in fit. If not None, it will be used as a multiplicative factor of the residual array, so that weights*(data - fit) is minimized in the least-squares sense.
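For example, with an array yerr of per-point uncertainties, a common (though not the only) choice is to weight by the inverse uncertainty. A minimal sketch, assuming gmodel, y, params, and x from the earlier Gaussian example:
result = gmodel.fit(y, params, x=x, weights=1.0/yerr)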
Fit Status¶
- aborted¶
Whether the fit was aborted.
- errorbars¶
Boolean for whether error bars were estimated by fit.
- flatchain¶
A pandas.DataFrame view of the sampling chain if the emcee method is used.
- ier¶
Integer returned code from scipy.optimize.leastsq.
- lmdif_message¶
String message returned from scipy.optimize.leastsq.
- message¶
String message returned from
minimize()
.
- method¶
String naming fitting method for
minimize()
.
- call_kws¶
Dict of keyword arguments actually sent to the underlying solver with minimize().
- scale_covar¶
Boolean flag for whether to automatically scale covariance matrix.
- userargs¶
Positional arguments passed to Model.fit(), a tuple of (y, weights).
- userkws¶
Keyword arguments passed to Model.fit(), a dict, which will have independent data arrays such as x.
Fit Statistics¶
- aic¶
Floating point best-fit Akaike Information Criterion statistic (see MinimizerResult – the optimization result).
- bic¶
Floating point best-fit Bayesian Information Criterion statistic (see MinimizerResult – the optimization result).
- chisqr¶
Floating point best-fit chi-square statistic (see MinimizerResult – the optimization result).
- ci_out¶
Confidence interval data (see Calculation of confidence intervals) or
None
if the confidence intervals have not been calculated.
- ndata¶
Integer number of data points.
- nfev¶
Integer number of function evaluations used for fit.
- nfree¶
Integer number of free parameters in fit.
- nvarys¶
Integer number of independent, freely varying variables in fit.
- redchi¶
Floating point reduced chi-square statistic (see MinimizerResult – the optimization result).
- rsquared¶
Floating point \(R^2\) statistic, defined for data \(y\) and best-fit model \(f\) as \(R^2 = 1 - \frac{\sum_i (y_i - f_i)^2}{\sum_i (y_i - \bar{y})^2}\).
- success¶
Boolean value of whether fit succeeded. This is an optimistic view of success, meaning that the method finished without error.
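All of these statistics can be read directly from a completed fit. A minimal sketch, assuming a ModelResult named result from any of the fits above:
print(result.ndata, result.nvarys, result.nfree)
print(result.chisqr, result.redchi, result.rsquared)
print(result.aic, result.bic)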
Calculating uncertainties in the model function¶
We return to the first example above and ask not only for the
uncertainties in the fitted parameters but for the range of values that
those uncertainties mean for the model function itself. We can use the
ModelResult.eval_uncertainty()
method of the model result object to
evaluate the uncertainty in the model with a specified level for
\(\sigma\).
That is, adding:
dely = result.eval_uncertainty(sigma=3)
plt.fill_between(x, result.best_fit-dely, result.best_fit+dely, color="#ABABAB",
label=r'3-$\sigma$ uncertainty band')
to the example fit to the Gaussian at the beginning of this chapter will give 3-\(\sigma\) bands for the best-fit Gaussian, and produce the figure below.
Added in version 1.0.4.
If the model is a composite built from multiple components, the ModelResult.eval_uncertainty() method will evaluate the uncertainty of both the full model (often the sum of multiple components) as well as the uncertainty in each component. The uncertainty of the full model will be held in result.dely, and the uncertainties for each component will be held in the dictionary result.dely_comps, with keys that are the component prefixes.
An example script shows how the uncertainties in components of a composite model can be calculated and used:
# <examples/doc_model_uncertainty2.py>
import matplotlib.pyplot as plt
import numpy as np
from lmfit.models import ExponentialModel, GaussianModel
dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]
model = (GaussianModel(prefix='g1_') +
GaussianModel(prefix='g2_') +
ExponentialModel(prefix='bkg_'))
params = model.make_params(bkg_amplitude=100, bkg_decay=80,
g1_amplitude=3000,
g1_center=100,
g1_sigma=10,
g2_amplitude=3000,
g2_center=150,
g2_sigma=10)
result = model.fit(y, params, x=x)
print(result.fit_report(min_correl=0.5))
comps = result.eval_components(x=x)
dely = result.eval_uncertainty(sigma=3)
fig, axes = plt.subplots(2, 2, figsize=(12.8, 9.6))
axes[0][0].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][0].plot(x, result.best_fit, '-', label='best fit')
axes[0][0].plot(x, result.init_fit, '--', label='initial fit')
axes[0][0].set_title('data, initial fit, and best-fit')
axes[0][0].legend()
axes[0][1].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][1].plot(x, result.best_fit, '-', label='best fit')
axes[0][1].fill_between(x, result.best_fit-dely, result.best_fit+dely,
color="#8A8A8A", label=r'3-$\sigma$ band')
axes[0][1].set_title('data, best-fit, and uncertainty band')
axes[0][1].legend()
axes[1][0].plot(x, result.best_fit, '-', label=r'best fit, 3-$\sigma$ band')
axes[1][0].fill_between(x,
result.best_fit-result.dely,
result.best_fit+result.dely,
color="#8A8A8A")
axes[1][0].plot(x, comps['bkg_'], label=r'background, 3-$\sigma$ band')
axes[1][0].fill_between(x,
comps['bkg_']-result.dely_comps['bkg_'],
comps['bkg_']+result.dely_comps['bkg_'],
color="#8A8A8A")
axes[1][0].plot(x, comps['g1_'], label=r'Gaussian #1, 3-$\sigma$ band')
axes[1][0].fill_between(x,
comps['g1_']-result.dely_comps['g1_'],
comps['g1_']+result.dely_comps['g1_'],
color="#8A8A8A")
axes[1][0].plot(x, comps['g2_'], label=r'Gaussian #2, 3-$\sigma$ band')
axes[1][0].fill_between(x,
comps['g2_']-result.dely_comps['g2_'],
comps['g2_']+result.dely_comps['g2_'],
color="#8A8A8A")
axes[1][0].set_title('model components with uncertainty bands')
axes[1][0].legend()
axes[1][1].plot(x, result.best_fit, '-', label='best fit')
axes[1][1].plot(x, 10*result.dely, label=r'3-$\sigma$ total (x10)')
axes[1][1].plot(x, 10*result.dely_comps['bkg_'], label=r'3-$\sigma$ background (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g1_'], label=r'3-$\sigma$ Gaussian #1 (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g2_'], label=r'3-$\sigma$ Gaussian #2 (x10)')
axes[1][1].set_title('uncertainties for model components')
axes[1][1].legend()
plt.show()
# <end examples/doc_model_uncertainty2.py>
[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='bkg_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77399 +/- 42.3838008 (1.00%) (init = 3000)
g1_center: 107.030957 +/- 0.15006868 (0.14%) (init = 100)
g1_sigma: 16.6725789 +/- 0.16048222 (0.96%) (init = 10)
g2_amplitude: 2493.41715 +/- 36.1696228 (1.45%) (init = 3000)
g2_center: 153.270104 +/- 0.19466723 (0.13%) (init = 150)
g2_sigma: 13.8069453 +/- 0.18680099 (1.35%) (init = 10)
bkg_amplitude: 99.0183280 +/- 0.53748639 (0.54%) (init = 100)
bkg_decay: 90.9508824 +/- 1.10310769 (1.21%) (init = 80)
g1_fwhm: 39.2609222 +/- 0.37790675 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880228 +/- 0.59217122 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128710 +/- 0.43988270 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455936 +/- 0.61721901 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
[[Correlations]] (unreported correlations are < 0.500)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(bkg_amplitude, bkg_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, bkg_decay) = -0.5074
Added in version 1.2.3.
In addition to the “confidence interval” result.dely, the ModelResult.eval_uncertainty() method will also estimate the “predicted interval” – the expected range for data matching the model – and hold this in result.dely_predicted. An example script showing both confidence and predicted intervals is shown below:
# <examples/doc_model_uncertainty_pred.py>
# Here we reproduce the results presented in Section 5.1 of the
# nonlinear regression lecture by Julien Chiquet
# https://jchiquet.github.io/MAP566/docs/regression/map566-lecture-nonlinear-regression.html
import matplotlib.pyplot as plt
import numpy as np
from lmfit import Model
# data from the R base package datasets for the waiting time between eruptions
# and the duration of the eruption for the Old Faithful geyser in Yellowstone
# National Park, Wyoming, USA.
tlen = np.array([3.6, 1.8, 3.333, 2.283, 4.533, 2.883, 4.7, 3.6, 1.95, 4.35,
1.833, 3.917, 4.2, 1.75, 4.7, 2.167, 1.75, 4.8, 1.6, 4.25,
1.8, 1.75, 3.45, 3.067, 4.533, 3.6, 1.967, 4.083, 3.85, 4.433,
4.3, 4.467, 3.367, 4.033, 3.833, 2.017, 1.867, 4.833, 1.833,
4.783, 4.35, 1.883, 4.567, 1.75, 4.533, 3.317, 3.833, 2.1,
4.633, 2, 4.8, 4.716, 1.833, 4.833, 1.733, 4.883, 3.717,
1.667, 4.567, 4.317, 2.233, 4.5, 1.75, 4.8, 1.817, 4.4, 4.167,
4.7, 2.067, 4.7, 4.033, 1.967, 4.5, 4, 1.983, 5.067, 2.017,
4.567, 3.883, 3.6, 4.133, 4.333, 4.1, 2.633, 4.067, 4.933,
3.95, 4.517, 2.167, 4, 2.2, 4.333, 1.867, 4.817, 1.833, 4.3,
4.667, 3.75, 1.867, 4.9, 2.483, 4.367, 2.1, 4.5, 4.05, 1.867,
4.7, 1.783, 4.85, 3.683, 4.733, 2.3, 4.9, 4.417, 1.7, 4.633,
2.317, 4.6, 1.817, 4.417, 2.617, 4.067, 4.25, 1.967, 4.6,
3.767, 1.917, 4.5, 2.267, 4.65, 1.867, 4.167, 2.8, 4.333,
1.833, 4.383, 1.883, 4.933, 2.033, 3.733, 4.233, 2.233, 4.533,
4.817, 4.333, 1.983, 4.633, 2.017, 5.1, 1.8, 5.033, 4, 2.4,
4.6, 3.567, 4, 4.5, 4.083, 1.8, 3.967, 2.2, 4.15, 2, 3.833,
3.5, 4.583, 2.367, 5, 1.933, 4.617, 1.917, 2.083, 4.583,
3.333, 4.167, 4.333, 4.5, 2.417, 4, 4.167, 1.883, 4.583, 4.25,
3.767, 2.033, 4.433, 4.083, 1.833, 4.417, 2.183, 4.8, 1.833,
4.8, 4.1, 3.966, 4.233, 3.5, 4.366, 2.25, 4.667, 2.1, 4.35,
4.133, 1.867, 4.6, 1.783, 4.367, 3.85, 1.933, 4.5, 2.383, 4.7,
1.867, 3.833, 3.417, 4.233, 2.4, 4.8, 2, 4.15, 1.867, 4.267,
1.75, 4.483, 4, 4.117, 4.083, 4.267, 3.917, 4.55, 4.083,
2.417, 4.183, 2.217, 4.45, 1.883, 1.85, 4.283, 3.95, 2.333,
4.15, 2.35, 4.933, 2.9, 4.583, 3.833, 2.083, 4.367, 2.133,
4.35, 2.2, 4.45, 3.567, 4.5, 4.15, 3.817, 3.917, 4.45, 2,
4.283, 4.767, 4.533, 1.85, 4.25, 1.983, 2.25, 4.75, 4.117,
2.15, 4.417, 1.817, 4.467])
delay = np.array([79, 54, 74, 62, 85, 55, 88, 85, 51, 85, 54, 84, 78, 47, 83,
52, 62, 84, 52, 79, 51, 47, 78, 69, 74, 83, 55, 76, 78, 79,
73, 77, 66, 80, 74, 52, 48, 80, 59, 90, 80, 58, 84, 58, 73,
83, 64, 53, 82, 59, 75, 90, 54, 80, 54, 83, 71, 64, 77, 81,
59, 84, 48, 82, 60, 92, 78, 78, 65, 73, 82, 56, 79, 71, 62,
76, 60, 78, 76, 83, 75, 82, 70, 65, 73, 88, 76, 80, 48, 86,
60, 90, 50, 78, 63, 72, 84, 75, 51, 82, 62, 88, 49, 83, 81,
47, 84, 52, 86, 81, 75, 59, 89, 79, 59, 81, 50, 85, 59, 87,
53, 69, 77, 56, 88, 81, 45, 82, 55, 90, 45, 83, 56, 89, 46,
82, 51, 86, 53, 79, 81, 60, 82, 77, 76, 59, 80, 49, 96, 53,
77, 77, 65, 81, 71, 70, 81, 93, 53, 89, 45, 86, 58, 78, 66,
76, 63, 88, 52, 93, 49, 57, 77, 68, 81, 81, 73, 50, 85, 74,
55, 77, 83, 83, 51, 78, 84, 46, 83, 55, 81, 57, 76, 84, 77,
81, 87, 77, 51, 78, 60, 82, 91, 53, 78, 46, 77, 84, 49, 83,
71, 80, 49, 75, 64, 76, 53, 94, 55, 76, 50, 82, 54, 75, 78,
79, 78, 78, 70, 79, 70, 54, 86, 50, 90, 54, 54, 77, 79, 64,
75, 47, 86, 63, 85, 82, 57, 82, 67, 74, 54, 83, 73, 73, 88,
80, 71, 83, 56, 79, 78, 84, 58, 83, 43, 60, 75, 81, 46, 90,
46, 74])
# model with a modified logistic function + offset
def logistic_func(t, amp, off, tau, gamma):
    return (amp-off)/(1+np.exp(-gamma*(t-tau))) + off
# set up model and initial parameter values
mod = Model(logistic_func)
pars = mod.make_params(amp=90, gamma=2, tau=2, off=50)
# run fit, and print report
result = mod.fit(delay, pars, t=tlen)
print(result.fit_report())
# make finely spaced grid of duration values, extending past data range
tfine = np.linspace(1, 6, 101)
yfine = result.eval(t=tfine)
# now calculate uncertainty interval and predicted interval for sigma=2, 95% level
efine = result.eval_uncertainty(t=tfine, sigma=2)
pfine = result.dely_predicted
plt.plot(tlen, delay, '.', label='observations')
plt.plot(tlen, result.best_fit, 'o', label='best fit')
plt.plot(tfine, yfine, '-', label='fine grid model')
plt.fill_between(tfine, yfine-efine, yfine+efine,
color="#c0c0c0", label=r'$2\sigma$ confidence interval')
plt.fill_between(tfine, yfine-pfine, yfine+pfine,
color="#d0d0a060", label=r'$2\sigma$ predicted interval')
plt.xlabel('duration (minutes)')
plt.ylabel('delay to next eruption (minutes)')
plt.legend()
plt.show()
# <end examples/doc_model_uncertainty_pred.py>
[[Model]]
Model(logistic_func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 60
# data points = 272
# variables = 4
chi-square = 8469.42359
reduced chi-square = 31.6023268
Akaike info crit = 943.249061
Bayesian info crit = 957.672270
R-squared = 0.83090615
[[Variables]]
amp: 82.4660404 +/- 0.99770817 (1.21%) (init = 90)
off: 51.3218514 +/- 1.83125060 (3.57%) (init = 50)
tau: 3.05525924 +/- 0.11068420 (3.62%) (init = 2)
gamma: 2.25386376 +/- 0.43544396 (19.32%) (init = 2)
[[Correlations]] (unreported correlations are < 0.100)
C(off, gamma) = +0.8634
C(amp, gamma) = -0.7706
C(off, tau) = +0.7277
C(amp, off) = -0.5381
C(tau, gamma) = +0.4699
Using uncertainties in the fitted parameters for post-fit calculations¶
Added in version 1.2.2.
As with the previous section, after a fit is complete you may want to do further calculations with the resulting Parameter values. Since these Parameters usually have not only best-fit values but also uncertainties, it is desirable for subsequent calculations to propagate those uncertainties to any resulting calculated value. In addition, Parameters commonly have finite (and sometimes large) correlations, which should be taken into account in such calculations.
The ModelResult.uvars attribute will be a dictionary with keys for all variable Parameters and values that are uvalues from the uncertainties package. When used in mathematical calculations with basic Python operators or numpy functions, these uvalues will automatically propagate their uncertainties to the resulting calculation, taking into account the full covariance matrix describing the correlation between values.
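A minimal sketch, assuming a completed fit result whose model has variables named amp and wid (the names are illustrative):
uamp = result.uvars['amp']
uwid = result.uvars['wid']
area = uamp * uwid  # uncertainty propagates automatically, including correlations
print(area.nominal_value, area.std_dev)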
This readily allows “derived Parameters” to be evaluated just after the fit. In fact, it might be useful to have a Model always do such a calculation just after the fit. The Model.post_fit() method allows exactly that: you can overwrite this otherwise empty method for any Model. It takes one argument: the ModelResult instance just after the actual fit has run (and before Model.fit() returns) and can be used to add Parameters or do other post-fit processing.
The following example script shows a few different methods for calculating a centroid value for two peaks: doing the calculation directly after the fit with result.uvars, capturing it in a Model.post_fit() method that would be run for all instances of that model, or defining derived Parameters before the fit. It also demonstrates that taking correlations between Parameters into account when performing calculations can have a noticeable influence on the resulting uncertainties.
# <examples/doc_parameters_uvars.py>
import numpy as np
from lmfit.models import ExponentialModel, GaussianModel
dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]
bkg = ExponentialModel(prefix='exp_')
gauss1 = GaussianModel(prefix='g1_')
gauss2 = GaussianModel(prefix='g2_')
mod = gauss1 + gauss2 + bkg
pars = mod.make_params(g1_center=105, g1_amplitude=4000, g1_sigma={'value': 15, 'min': 0},
g2_center=155, g2_amplitude=2000, g2_sigma={'value': 15, 'min': 0},
exp_amplitude=100, exp_decay=100)
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.5))
# Area and Centroid of the combined peaks
# option 1: from output uncertainties values
uvar_g1amp = out.uvars['g1_amplitude']
uvar_g2amp = out.uvars['g2_amplitude']
uvar_g1cen = out.uvars['g1_center']
uvar_g2cen = out.uvars['g2_center']
print("### Peak Area: ")
print(f"Peak1: {out.params['g1_amplitude'].value:.5f}+/-{out.params['g1_amplitude'].stderr:.5f}")
print(f"Peak2: {out.params['g2_amplitude'].value:.5f}+/-{out.params['g2_amplitude'].stderr:.5f}")
print(f"Sum : {(uvar_g1amp + uvar_g2amp):.5f}")
area_quadrature = np.sqrt(out.params['g1_amplitude'].stderr**2 + out.params['g2_amplitude'].stderr**2)
print(f"Correlation of peak1 and peak2 areas: {out.params['g1_amplitude'].correl['g2_amplitude']:.5f}")
print(f"Stderr Quadrature sum for peak1 and peak2 area2: {area_quadrature:.5f}")
print(f"Stderr of peak1 + peak2 area, with correlation: {(uvar_g1amp+uvar_g2amp).std_dev:.5f}")
print("### Peak Centroid: ")
centroid = (uvar_g1amp * uvar_g1cen + uvar_g2amp * uvar_g2cen) / (uvar_g1amp + uvar_g2amp)
print(f"Peak Centroid: {centroid:.5f}")
# option 2: define corresponding variables after fit
def post_fit(result):
    "example post fit function"
    result.params.add('post_area', expr='g1_amplitude + g2_amplitude')
    result.params.add('post_centroid', expr='(g1_amplitude*g1_center+ g2_amplitude*g2_center)/(g1_amplitude + g2_amplitude)')
mod.post_fit = post_fit
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.5))
# option 3: define corresponding variables before fit
pars.add('area', expr='g1_amplitude + g2_amplitude')
pars.add('centroid', expr='(g1_amplitude*g1_center+ g2_amplitude*g2_center)/(g1_amplitude + g2_amplitude)')
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.5))
# <end examples/doc_parameters_uvars.py>
[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77390 +/- 42.3837959 (1.00%) (init = 4000)
g1_center: 107.030957 +/- 0.15006877 (0.14%) (init = 105)
g1_sigma: 16.6725785 +/- 0.16048211 (0.96%) (init = 15)
g2_amplitude: 2493.41716 +/- 36.1697062 (1.45%) (init = 2000)
g2_center: 153.270104 +/- 0.19466774 (0.13%) (init = 155)
g2_sigma: 13.8069453 +/- 0.18680153 (1.35%) (init = 15)
exp_amplitude: 99.0183277 +/- 0.53748650 (0.54%) (init = 100)
exp_decay: 90.9508838 +/- 1.10310767 (1.21%) (init = 100)
g1_fwhm: 39.2609214 +/- 0.37790648 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880229 +/- 0.59217051 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128709 +/- 0.43988398 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455939 +/- 0.61721985 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
[[Correlations]] (unreported correlations are < 0.500)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, exp_decay) = -0.5074
### Peak Area:
Peak1: 4257.77390+/-42.38380
Peak2: 2493.41716+/-36.16971
Sum : 6751.19106+/-46.50802
Correlation of peak1 and peak2 areas: -0.30712
Stderr Quadrature sum for peak1 and peak2 areas: 55.71924
Stderr of peak1 + peak2 area, with correlation: 46.50802
### Peak Centroid:
Peak Centroid: 124.10846+/-0.14074
[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77390 +/- 42.3837959 (1.00%) (init = 4000)
g1_center: 107.030957 +/- 0.15006877 (0.14%) (init = 105)
g1_sigma: 16.6725785 +/- 0.16048211 (0.96%) (init = 15)
g2_amplitude: 2493.41716 +/- 36.1697062 (1.45%) (init = 2000)
g2_center: 153.270104 +/- 0.19466774 (0.13%) (init = 155)
g2_sigma: 13.8069453 +/- 0.18680153 (1.35%) (init = 15)
exp_amplitude: 99.0183277 +/- 0.53748650 (0.54%) (init = 100)
exp_decay: 90.9508838 +/- 1.10310767 (1.21%) (init = 100)
g1_fwhm: 39.2609214 +/- 0.37790648 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880229 +/- 0.59217051 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128709 +/- 0.43988398 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455939 +/- 0.61721985 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
post_area: 6751.19106 +/- 46.5080249 (0.69%) == 'g1_amplitude + g2_amplitude'
post_centroid: 124.108459 +/- 0.14074482 (0.11%) == '(g1_amplitude*g1_center+ g2_amplitude*g2_center)/(g1_amplitude + g2_amplitude)'
[[Correlations]] (unreported correlations are < 0.500)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, exp_decay) = -0.5074
[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77390 +/- 42.3837959 (1.00%) (init = 4000)
g1_center: 107.030957 +/- 0.15006877 (0.14%) (init = 105)
g1_sigma: 16.6725785 +/- 0.16048211 (0.96%) (init = 15)
g2_amplitude: 2493.41716 +/- 36.1697062 (1.45%) (init = 2000)
g2_center: 153.270104 +/- 0.19466774 (0.13%) (init = 155)
g2_sigma: 13.8069453 +/- 0.18680153 (1.35%) (init = 15)
exp_amplitude: 99.0183277 +/- 0.53748650 (0.54%) (init = 100)
exp_decay: 90.9508838 +/- 1.10310767 (1.21%) (init = 100)
g1_fwhm: 39.2609214 +/- 0.37790648 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880229 +/- 0.59217051 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128709 +/- 0.43988398 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455939 +/- 0.61721985 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
area: 6751.19106 +/- 46.5080249 (0.69%) == 'g1_amplitude + g2_amplitude'
centroid: 124.108459 +/- 0.14074482 (0.11%) == '(g1_amplitude*g1_center+ g2_amplitude*g2_center)/(g1_amplitude + g2_amplitude)'
post_area: 6751.19106 +/- 46.5080249 (0.69%) == 'g1_amplitude + g2_amplitude'
post_centroid: 124.108459 +/- 0.14074482 (0.11%) == '(g1_amplitude*g1_center+ g2_amplitude*g2_center)/(g1_amplitude + g2_amplitude)'
[[Correlations]] (unreported correlations are < 0.500)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, exp_decay) = -0.5074
Note that Model.post_fit() does not need to be limited to this use case of adding derived Parameters.
Saving and Loading ModelResults¶
As with saving models (see section Saving and Loading Models), it is sometimes desirable to save a ModelResult, either for later use or to organize and compare different fit results. Lmfit provides a save_modelresult() function that will save a ModelResult to a file. There is also a companion load_modelresult() function that can read this file and reconstruct a ModelResult from it.
As discussed in section Saving and Loading Models, there are challenges to saving model functions that may make it difficult to restore a saved ModelResult in a way that can be used to perform a fit. Use of the optional funcdefs argument is generally the most reliable way to ensure that a loaded ModelResult can be used to evaluate the model function or redo the fit.
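For instance, to reload the Gaussian fit saved in the example below, one could pass the original model function through funcdefs. A hedged sketch, where gaussian is the model function defined at the start of this chapter:
from lmfit.model import load_modelresult

result = load_modelresult('gauss_modelresult.sav',
                          funcdefs={'gaussian': gaussian})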
- save_modelresult(modelresult, fname)¶
Save a ModelResult to a file.
- Parameters:
modelresult (ModelResult) – ModelResult to be saved.
fname (str) – Name of file for saved ModelResult.
- load_modelresult(fname, funcdefs=None)¶
Load a saved ModelResult from a file.
- Parameters:
fname (str) – Name of file containing saved ModelResult.
funcdefs (dict, optional) – Dictionary of custom function names and definitions.
- Returns:
ModelResult object loaded from file.
- Return type:
ModelResult
An example of saving a ModelResult is:
# <examples/doc_model_savemodelresult.py>
import numpy as np
from lmfit.model import save_modelresult
from lmfit.models import GaussianModel
data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]
gmodel = GaussianModel()
result = gmodel.fit(y, x=x, amplitude=5, center=5, sigma=1)
save_modelresult(result, 'gauss_modelresult.sav')
print(result.fit_report())
# <end examples/doc_model_savemodelresult.py>
To load that later, one might do:
# <examples/doc_model_loadmodelresult.py>
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
from lmfit.model import load_modelresult
if not os.path.exists('gauss_modelresult.sav'):
os.system(f"{sys.executable} doc_model_savemodelresult.py")
data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]
result = load_modelresult('gauss_modelresult.sav')
print(result.fit_report())
plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodelresult.py>
Composite Models : adding (or multiplying) Models¶
One of the more interesting features of the Model class is that Models can be added together or combined with basic algebraic operations (add, subtract, multiply, and divide) to give a composite model. The composite model will have parameters from each of the component models, with all parameters being available to influence the whole model. This ability to combine models will become even more useful in the next chapter, when pre-built subclasses of Model are discussed. For now, we’ll consider a simple example and build a model of a Gaussian plus a line, to model a peak with a background. For such a simple problem, we could just build a model that included both components:
from numpy import exp, pi, sqrt

def gaussian_plus_line(x, amp, cen, wid, slope, intercept):
    """line + 1-d gaussian"""
    gauss = (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))
    line = slope*x + intercept
    return gauss + line
and use that with:
mod = Model(gaussian_plus_line)
But we already have a function for a Gaussian, and we might later discover that a linear background isn’t sufficient, which would mean the model function would have to be changed.
Instead, lmfit allows models to be combined into a CompositeModel. As an alternative to including a linear background in our model function, we could define a linear function:
def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept
and build a composite model with just:
mod = Model(gaussian) + Model(line)
This model has parameters for both component models, and can be used as:
# <examples/doc_model_two_components.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt
from lmfit import Model
data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1] + 0.25*x - 1.0
def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept
mod = Model(gaussian) + Model(line)
pars = mod.make_params(amp=5, cen=5, wid={'value': 1, 'min': 0},
slope=0, intercept=1)
result = mod.fit(y, pars, x=x)
print(result.fit_report())
plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_two_components.py>
which prints out the results:
[[Model]]
(Model(gaussian) + Model(line))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 101
# variables = 5
chi-square = 2.57855517
reduced chi-square = 0.02685995
Akaike info crit = -360.457020
Bayesian info crit = -347.381417
R-squared = 0.99194643
[[Variables]]
amp: 8.45930976 +/- 0.12414531 (1.47%) (init = 5)
cen: 5.65547889 +/- 0.00917673 (0.16%) (init = 5)
wid: 0.67545513 +/- 0.00991697 (1.47%) (init = 1)
slope: 0.26484403 +/- 0.00574892 (2.17%) (init = 0)
intercept: -0.96860189 +/- 0.03352202 (3.46%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(slope, intercept) = -0.7954
C(amp, wid) = +0.6664
C(amp, intercept) = -0.2216
C(amp, slope) = -0.1692
C(cen, slope) = -0.1618
C(wid, intercept) = -0.1477
C(cen, intercept) = +0.1287
C(wid, slope) = -0.1127
and shows the plot on the left.
On the left, the data are shown as blue dots, the total fit as a solid green line, and the initial fit as an orange dashed line. The figure on the right again shows the data as blue dots, with the Gaussian component as an orange dashed line and the linear component as a green dashed line. It was created using the following code:
comps = result.eval_components()
plt.plot(x, y, 'o')
plt.plot(x, comps['gaussian'], '--', label='Gaussian component')
plt.plot(x, comps['line'], '--', label='Line component')
The components were generated after the fit using the ModelResult.eval_components() method of the result, which returns a dictionary of the components, using keys of the model name (or prefix if that is set). This will use the parameter values in result.params and the independent variables (x) used during the fit. Note that while the ModelResult held in result does store the best parameters and the best estimate of the model in result.best_fit, the original model and parameters in pars are left unaltered.
You can apply this composite model to other data sets, or evaluate the model at other values of x. You may want to do this to give a finer or coarser spacing of data points, or to extrapolate the model outside the fitting range. This can be done with:
xwide = linspace(-5, 25, 3001)
predicted = mod.eval(result.params, x=xwide)
In this example, the argument names for the model functions do not overlap. If they had, the prefix argument to Model would have allowed us to identify which parameter went with which component model. As we will see in the next chapter, using composite models with the built-in models provides a simple way to build up complex models.
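For example, if both component functions had used a parameter named amp, giving each component a prefix would keep the parameter names distinct. A hedged sketch, with illustrative prefix strings:
mod = Model(gaussian, prefix='g_') + Model(line, prefix='l_')
print(mod.param_names)
# ['g_amp', 'g_cen', 'g_wid', 'l_slope', 'l_intercept']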
- class CompositeModel(left, right, op[, **kws])¶
Combine two models (left and right) with binary operator (op).
Normally, one does not have to explicitly create a CompositeModel, but can use the normal Python operators +, -, *, and / to combine components, as in:
>>> mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
- Parameters:
left (Model) – Left-hand model.
right (Model) – Right-hand model.
op (callable binary operator) – Operator to combine left and right models.
**kws – Additional keywords are passed to Model when creating this new model.
Notes
The two models can use different independent variables.
Note that when using built-in Python binary operators, a CompositeModel will automatically be constructed for you. That is, doing:
mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
will create a CompositeModel. Here, left will be Model(fcn1), op will be operator.add(), and right will be another CompositeModel that has a left attribute of Model(fcn2), an op of operator.mul(), and a right of Model(fcn3).
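This tree structure can be inspected directly. A hedged sketch, assuming fcn1, fcn2, and fcn3 are simple model functions defined elsewhere:
import operator

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
print(mod.op is operator.add)        # True
print(mod.left)                      # Model(fcn1)
print(mod.right.op is operator.mul)  # True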
To use a binary operator other than +, -, *, or /, you can explicitly create a CompositeModel with the appropriate binary operator. For example, to convolve two models, you could define a simple convolution function, perhaps as:
import numpy as np
def convolve(dat, kernel):
    """simple convolution of two arrays"""
    npts = min(len(dat), len(kernel))
    pad = np.ones(npts)
    tmp = np.concatenate((pad*dat[0], dat, pad*dat[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return (out[noff:])[:npts]
which extends the data in both directions so that the convolving kernel function gives a valid result over the data range. Because this function takes two array arguments and returns an array, it can be used as the binary operator. A full script using this technique is here:
# <examples/doc_model_composite.py>
import matplotlib.pyplot as plt
import numpy as np
from lmfit import CompositeModel, Model
from lmfit.lineshapes import gaussian, step
# create data from broadened step
x = np.linspace(0, 10, 201)
y = step(x, amplitude=12.5, center=4.5, sigma=0.88, form='erf')
np.random.seed(0)
y = y + np.random.normal(scale=0.35, size=x.size)
def jump(x, mid):
    """Heaviside step function."""
    o = np.zeros(x.size)
    imid = max(np.where(x <= mid)[0])
    o[imid:] = 1.0
    return o

def convolve(arr, kernel):
    """Simple convolution of two arrays."""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff+npts]
# create Composite Model using the custom convolution operator
mod = CompositeModel(Model(jump), Model(gaussian), convolve)
# create parameters for model. Note that 'mid' and 'center' will be highly
# correlated. Since 'mid' is used as an integer index, it will be very
# hard to fit, so we fix its value
pars = mod.make_params(amplitude=dict(value=1, min=0),
center=3.5,
sigma=dict(value=1.5, min=0),
mid=dict(value=4, vary=False))
# fit this model to data array y
result = mod.fit(y, params=pars, x=x)
print(result.fit_report())
# generate components
comps = result.eval_components(x=x)
# plot results
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
axes[0].plot(x, y, 'bo')
axes[0].plot(x, result.init_fit, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend()
axes[1].plot(x, y, 'bo')
axes[1].plot(x, 10*comps['jump'], 'k--', label='Jump component')
axes[1].plot(x, 10*comps['gaussian'], 'r-', label='Gaussian component')
axes[1].legend()
plt.show()
# <end examples/doc_model_composite.py>
which prints out the results:
[[Model]]
(Model(jump) <function convolve at 0x17de228e0> Model(gaussian))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 201
# variables = 3
chi-square = 24.7562335
reduced chi-square = 0.12503148
Akaike info crit = -414.939746
Bayesian info crit = -405.029832
R-squared = 0.99632577
[[Variables]]
mid: 4 (fixed)
amplitude: 0.62508458 +/- 0.00189732 (0.30%) (init = 1)
center: 5.50853669 +/- 0.00973231 (0.18%) (init = 3.5)
sigma: 0.59576097 +/- 0.01348579 (2.26%) (init = 1.5)
[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, center) = +0.3292
C(amplitude, sigma) = +0.2680
and shows the plots:
Using composite models with built-in or custom operators allows you to build complex models from testable sub-components.