I have a new Python project I would like to share with the community. Actually, this project isn't so new. I developed an initial version about two years before completing my postdoctoral research, and it has undergone various revisions over the past three years. Having finally made time to give it the clean-up it needed,[1] I am excited to share it on GitHub.
Overview
pdLSR is a library for performing least squares minimization. It attempts to incorporate this task seamlessly into a Pandas-focused workflow. Input data are expected in dataframes, and multiple regressions can be performed using functionality similar to Pandas groupby. Results are returned as grouped dataframes and include best-fit parameters, statistics, residuals, and more. The results can be easily visualized using seaborn.
pdLSR currently utilizes lmfit, a flexible and powerful library for least squares minimization, which in turn makes use of scipy.optimize.leastsq. I began using lmfit because it is one of the few libraries that supports non-linear least squares (NLS) regression, which is commonly used in the natural sciences. I also like the flexibility it offers for testing different modeling scenarios and the variety of assessment statistics it provides. However, I found myself writing many for loops to perform regressions on groups of data and aggregate the resulting output. Simplifying this task was my inspiration for writing pdLSR.
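For illustration, here is roughly the pattern pdLSR eliminates. This is a hypothetical sketch: the residual function (residual) and the lmfit Parameters object (params) are stand-ins, not code from the library.

import lmfit

# Hypothetical boilerplate that pdLSR replaces: fit each group of the
# dataframe separately and collect the lmfit results by group key.
fits = {}
for key, group in data.groupby(['resi', 'field']):
    fits[key] = lmfit.minimize(residual, params,
                               args=(group['time'], group['intensity']))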
pdLSR is related to libraries such as statsmodels and scikit-learn that provide linear regression functions that operate on dataframes. However, these libraries don't directly support grouping operations on dataframes.
The aggregation of minimization output parameters performed by pdLSR has many similarities to the R library broom, which is written by David Robinson, with whom I had an excellent conversation about our two libraries. broom is more general in its ability to accept input from many minimizers, and I think expanding pdLSR in this fashion, for compatibility with statsmodels and scikit-learn for example, could be useful in the future.
Minimization setup
To demonstrate how to use pdLSR, I will be using some sample scientific data: nuclear magnetic resonance (NMR) data acquired at two different magnetic field strengths (14.1 and 18.8 T) on the DNA-binding region of a transcription factor called GCN4.
There are six amino acids in the enclosed data set (numbered 51-56), so using amino acid residue (resi) and magnetic field (field) as the groupby columns means we will need to perform twelve regressions.
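The code below assumes the standard imports for this kind of notebook; the original import cell isn't shown, so this is my reconstruction:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import pdLSR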
data = pd.read_csv('GCN4_twofield.tsv', sep='\t')
! head GCN4_twofield.tsv
We will be analyzing this data to determine the rate of exponential decay as a function of time for every amino acid residue at each of the two magnetic field strengths. The equation looks like this:
$I_{(t)} = I_{(0)} \, e^{-R t}$
where $I_{(0)}$ is the initial intensity, $I_{(t)}$ is the intensity at time $t$, and $R$ is the exponential decay rate. We have $I_{(t)}$ and $t$ from the data above and will use NLS to determine $I_{(0)}$ and $R$.
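pdLSR needs this equation as a Python model function, the exponential_decay passed to pdLSR below. Its definition isn't shown in this post, so here is a minimal sketch assuming a signature with one argument per fit parameter:

# Minimal sketch of the model function (assumed signature):
# I(t) = I(0) * exp(-R * t), with inten as I(0) and rate as R.
def exponential_decay(x, inten, rate):
    return inten * np.exp(-rate * x)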
Regression and prediction
Parameters for pdLSR are entered as a list of dictionaries, with the format being similar to that used by lmfit.
Performing the regression is quite simple: just call the pdLSR class with the input parameters set. The fit method performs the regression, and predict calculates a best-fit line at higher resolution for plotting.
params = [{'name': 'inten',
           'value': np.asarray(data.groupby(['resi', 'field']).intensity.max()),
           'vary': True},
          {'name': 'rate',
           'value': 20.0,
           'vary': True}]

minimizer_kwargs = {'params': params,
                    'method': 'leastsq',
                    'sigma': 0.95,
                    'threads': None}
fit_data = pdLSR.pdLSR(data, exponential_decay,
                       groupby=['resi', 'field'],
                       xname='time', yname='intensity',
                       minimizer='lmfit',
                       minimizer_kwargs=minimizer_kwargs)

fit_data.fit()
fit_data.predict()
Results
From these simple commands, five output tables are created:
- data for the input data, calculated data, and residuals
- results that contains the best-fit parameters and estimation of their error
- stats for statistics related to the regression, such as chi-squared and AIC
- model that contains a best-fit line created by the predict method
- covar that contains the covariance matrices
Let's take a look at a couple of these tables. The results table contains best-fit parameters, their standard errors, and confidence intervals.
fit_data.results.head(n=4)
The stats table contains the following statistics for each of the regression groups:
- Number of observations (nobs)
- Number of fit parameters (npar)
- Degrees of freedom (dof)
- Chi-squared (chisqr)
- Reduced chi-squared (redchi)
- Akaike information criterion (aic)
- Bayesian information criterion (bic)
fit_data.stats.head(n=4)
It is also easy to access the covariance matrix for calculations.
fit_data.covar.loc[(51, 14.1)]
fit_data.pivot_covar().loc[(51, 14.1)].values
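As an example of such a calculation, the covariance matrix can be converted to a correlation matrix with plain NumPy; this is generic linear algebra, not a pdLSR method:

# Correlation between fit parameters: corr_ij = cov_ij / (stderr_i * stderr_j)
cov = fit_data.pivot_covar().loc[(51, 14.1)].values
stderr = np.sqrt(np.diag(cov))
correlation = cov / np.outer(stderr, stderr)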
Visualization
The results can be visualized in facet plots with Seaborn. To make it easier to view the data, all intensities have been normalized.
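The normalization step isn't shown here; a minimal sketch, assuming each group is scaled by its maximum intensity before fitting, might look like this:

# Hypothetical normalization (applied to data before fitting): scale each
# group's intensities by its maximum so every decay curve starts near 1.0.
data['intensity'] = (data.groupby(['resi', 'field'])['intensity']
                         .transform(lambda s: s / s.max()))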
plot_data = pd.concat([fit_data.data,
                       fit_data.model], axis=0).reset_index()

colors = sns.color_palette()
palette = {14.1: colors[0], 18.8: colors[2]}

grid = sns.FacetGrid(plot_data, col='resi', hue='field', palette=palette,
                     col_wrap=3, size=2.0, aspect=0.75,
                     sharey=True, despine=True)

grid.map(plt.plot, 'xcalc', 'ycalc', marker='', ls='-', lw=1.0)
grid.map(plt.plot, 'time', 'intensity', marker='o', ms=5, ls='')

grid.set(xticks=np.linspace(0.05, 0.25, 5),
         ylim=(-0.1, 1.05))

ax = grid.axes[0]
legend = ax.get_legend_handles_labels()
ax.legend(legend[0][2:], legend[1][2:], loc=0, frameon=True)

f = plt.gcf()
f.set_size_inches(12, 8)
f.subplots_adjust(wspace=0.2, hspace=0.25)
Just for fun, here's a bar graph of the decay rates determined from NLS.
plot_data = (fit_data.results
             .sort_index(axis=1)
             .loc[:, ('rate', ['value', 'stderr'])]
             )
plot_data.columns = plot_data.columns.droplevel(0)
plot_data.reset_index(inplace=True)

fig = plt.figure()
fig.set_size_inches(7, 5)
ax = plt.axes()

palette = [colors[0], colors[2]]

for pos, (field, dat) in enumerate(plot_data.groupby('field')):
    _ = dat.plot('resi', 'value', yerr='stderr',
                 kind='bar', label=field, color=palette[pos],
                 position=(-pos) + 1, ax=ax, width=0.4)

ax.set_ylabel('decay rate (s$^{-1}$)')
ax.set_xlabel('residue')
ax.set_xlim(ax.get_xlim()[0] - 0.5, ax.get_xlim()[1])
plt.xticks(rotation=0)

sns.despine()
plt.tight_layout()
Conclusion
Easy, right? Using pdLSR over the past few years has made my Python-based analytical workflows much smoother. Let me know how it works for you if you decide to try it!
If you are interested in trying pdLSR, you can do so without even installing it. There is a live demo available on GitHub. Click here and navigate to pdLSR --> demo --> pdLSR_demo.ipynb.
The package can be installed in three different ways:
- Using conda with conda install -c mlgill pdlsr
- Using pip with pip install pdLSR
- Manually from the GitHub repo
This post was written in a Jupyter notebook, which can be downloaded and viewed statically here.
[1] And with a helpful nudge in the form of the excellent data science bootcamp I'm currently attending. Stay tuned for more about that!