Machine learning meets pKa

We present a small-molecule pKa prediction tool entirely written in Python. It predicts the macroscopic pKa value and is trained on a literature compilation of monoprotic compounds. Different machine learning models were tested and random forest performed best in a five-fold cross-validation (mean absolute error = 0.682, root mean squared error = 1.032, correlation coefficient r² = 0.82). We tested our model on two external validation sets, where it performs comparably to Marvin and better than OPERA, a recently published open source model. Our Python tool and all data are freely available at https://github.com/czodrowskilab/Machine-learning-meets-pKa.


Amendments from Version 1

Introduction
The acid-base dissociation constant (pKa) of a drug has a far-reaching influence on pharmacokinetics by altering the solubility, membrane permeability and protein binding affinity of the drug. Several publications summarize these findings in a very comprehensive manner [1][2][3][4][5][6][7]. An accurate estimation of pKa values is therefore of utmost importance for successful drug design. Several (commercial and non-commercial) tools and approaches for small molecule pKa prediction are available: MoKa 8 uses molecular interaction fields, whereas ACD/Labs Percepta Classic 9, Marvin 10 and Epik 11 make use of the Hammett-Taft equation. By means of Jaguar 12, a quantum-mechanical approach to pKa prediction becomes possible. Recently, the usage of neural nets for pKa prediction became prominent [13][14][15]. In particular, the publication by Williams et al. 15 makes use of a publicly available data set provided by the application DataWarrior 16 and provides a freely available pKa prediction tool called OPERA.
As this article is part of a Python collection issue, we provide a pKa prediction method entirely written in Python 17 and make it available open source (including all data). Our tool computes the macroscopic pKa value for a monoprotic compound. Our model differentiates between an acid and a base solely on the basis of the predicted pKa value; i.e., we do not offer separate models for acids and bases. In addition to pKa data from DataWarrior 16, we also employ pKa data from ChEMBL 18. As external validation sets, we use compound data provided by Novartis 19 and a manually curated data set compiled from literature 20-24, neither of which is part of the training data.

Data set preparation
A ChEMBL 18 web search was performed to find all assays containing pKa measurement data. The following restrictions were applied: the assay must be a physicochemical assay, the measurements must be taken from scientific literature, the assay must be in "small-molecule physicochemical format" and the organism taxonomy must be set to "N/A". This resulted in a list of 1140 ChEMBL assays downloaded as a CSV file.
Using a Python script, the CSV file was read in and processed further, extracting all additional information required from an internally hosted copy of the ChEMBL database via SQL. Only pKa measurements (i.e., ChEMBL activities) were taken into account that were specified as exact ("standard_relation" equals "=") and for which one of the following names was specified as "standard_type": "pka", "pka value", "pka1", "pka2", "pka3" or "pka4" (case-insensitive). Measured values for which the molecular structure was not available were also discarded. The resulting 8111 pKa measurements were saved as an SDF file.
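The activity filter described above can be sketched in a few lines of plain Python. The field names mirror ChEMBL's "standard_relation"/"standard_type" columns as quoted in the text, but the records themselves are invented for illustration; this is not the authors' actual script.

```python
# Illustrative sketch of the ChEMBL activity filter described in the text.
# Field names mirror ChEMBL's columns; the records are invented.
ALLOWED_TYPES = {"pka", "pka value", "pka1", "pka2", "pka3", "pka4"}

activities = [
    {"standard_relation": "=", "standard_type": "pKa", "standard_value": 7.4},
    {"standard_relation": ">", "standard_type": "pKa", "standard_value": 3.1},
    {"standard_relation": "=", "standard_type": "logP", "standard_value": 2.0},
    {"standard_relation": "=", "standard_type": "PKA2", "standard_value": 9.8},
]

def is_exact_pka(record):
    """Keep only exact ('=') measurements whose type is one of the pKa
    variants, matched case-insensitively as in the text."""
    return (record["standard_relation"] == "="
            and record["standard_type"].lower() in ALLOWED_TYPES)

exact_pka = [r for r in activities if is_exact_pka(r)]
print([r["standard_value"] for r in exact_pka])  # [7.4, 9.8]
```

Only the first and last records survive: the second uses an inexact relation (">") and the third is not a pKa measurement at all.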
In addition, a flat file from DataWarrior 16 named "pKaInWater.dwar" was used. This file only had to be converted to an SDF file; it contains 7911 entries with valid molecular structures.
These data sets were concatenated for the purpose of this study and preprocessed as follows:
• Removal of all salts from molecules
• Removal of molecules containing nitro groups, boron, selenium or silicon
• Filtering by Lipinski's rule of five (one violation allowed)
• Keeping only pKa data points between 2 and 12
• Tautomer standardization of all molecules
• Protonation of all molecules at pH 7.4
• Keeping only monoprotic molecules regarding the specified pKa range
• Combination of data points from duplicated structures while removing outliers

All steps up to filtering out pKa values outside the range of 2 to 12 were performed with Python and RDKit 25. The QuacPac 26 Tautomers tool from OpenEye was used for tautomer standardization and for setting the protonation state to pH 7.4. The Marvin 10 tool from ChemAxon was used to filter out multiprotic compounds: Marvin predicted the pKa values of all molecules in the range 2 to 12, and only those molecules for which Marvin did not predict more than one pKa in that range were retained.
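Two of the simpler filtering steps above (forbidden elements and the pKa range of 2 to 12) can be sketched without any cheminformatics dependency. Structure-level steps such as salt removal, tautomer standardization and protonation require RDKit and QUACPAC and are omitted here; each record instead carries a precomputed element set, and all names and data are invented for illustration.

```python
# Sketch of the element and pKa-range filters described above.
# Records carry a precomputed element set; structure handling (salt
# removal, tautomers, protonation) is out of scope for this sketch.
FORBIDDEN_ELEMENTS = {"B", "Se", "Si"}  # boron, selenium, silicon

molecules = [
    {"name": "mol1", "elements": {"C", "H", "N", "O"}, "pka": 4.6},
    {"name": "mol2", "elements": {"C", "H", "B"}, "pka": 7.0},   # contains boron
    {"name": "mol3", "elements": {"C", "H", "O"}, "pka": 13.5},  # outside 2-12
]

def passes_filters(mol):
    """True if the molecule contains no forbidden element and its pKa
    lies in the retained range of 2 to 12."""
    return (not (mol["elements"] & FORBIDDEN_ELEMENTS)
            and 2.0 <= mol["pka"] <= 12.0)

kept = [m["name"] for m in molecules if passes_filters(m)]
print(kept)  # ['mol1']
```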
The removal of outliers was performed in two steps. First, before combining multiple measurements for the same molecule, all entries for which the pKa predicted by ChemAxon's Marvin tool differed from the experimental value by more than four log units were removed. All molecules were then combined on the basis of their canonical isomeric SMILES. In the second step, when combining several measured values for a molecule, all values deviating from the mean by more than two standard deviations were removed.
The remaining values were arithmetically averaged.
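The two-step outlier handling and averaging can be sketched as follows. This is an illustrative reading of the procedure, not the authors' code: the function name and the data are invented, a non-empty measurement list is assumed, and the population standard deviation is used (the text does not specify which estimator).

```python
from statistics import mean, pstdev

def clean_and_average(measurements, marvin_pred):
    """Step 1: drop values deviating from the Marvin prediction by more
    than 4 log units. Step 2: drop values more than two standard
    deviations from the mean. Return the average of what remains."""
    step1 = [v for v in measurements if abs(v - marvin_pred) <= 4.0]
    m, sd = mean(step1), pstdev(step1)
    step2 = [v for v in step1 if abs(v - m) <= 2 * sd] if sd > 0 else step1
    return mean(step2)

# Five replicate measurements for one hypothetical molecule; 12.9 differs
# from the Marvin prediction of 7.0 by more than 4 log units and is
# removed in step 1, so the four remaining values are averaged.
print(clean_and_average([6.8, 7.1, 7.0, 6.9, 12.9], marvin_pred=7.0))
```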
In total, 5994 unique monoprotic molecules with experimental pKa values remained. The distribution of pKa values is given in Figure 1; Figure 2(A) and Figure 2(B) provide further views of the data set.

Machine learning
First, to simplify cross-validation, a class "CVRegressor" was defined, which can serve as a wrapper for any regressor implementing the Scikit-Learn 27 interface. This class simplifies cross-validation itself, as well as training and prediction with the cross-validated model. Next, 196 of the 200 available RDKit descriptors ("MaxPartialCharge", "MinPartialCharge", "MaxAbsPartialCharge" and "MinAbsPartialCharge" were not used because they are computed as "NaN" for many molecules) and a 4096-bit MorganFeature fingerprint with radius 3 were calculated for the training data set. Random forest (RF), support vector regression (SVR, two configurations), multilayer perceptron (MLP, three configurations) and XGradientBoost (XGB) were used as basic regressors. Unless otherwise specified, the Scikit-Learn default parameters (version 0.22.1) were used. For the RF model, only the number of trees was increased to 1000. For SVR, the size of the cache was increased to 4096 megabytes in the first configuration, but this only increases the training speed and has no influence on the model quality.
In the second configuration, the parameter "gamma" was additionally set to the value "auto". For MLP in the first configuration, the number of hidden layers was increased to two and the number of neurons per layer to 500. In the second configuration, early stopping was additionally activated, with 10% of the training data held out as validation data. If the error on the validation data did not improve by more than 0.001 over ten training epochs, training was stopped early to avoid overtraining. In the third configuration, three hidden layers with 250 neurons each were used, with early stopping still activated. For XGB, the default parameters of the library XGBoost (version 0.90) 29 were applied. The training of RF, MLP and XGB was parallelized over 12 CPU cores, and both the generation of the folds for cross-validation and the training itself were seeded with the value 24 to ensure reproducibility. This resulted in a total of seven different machine learning configurations.
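The configurations described above map onto Scikit-Learn constructor arguments roughly as follows. The parameter values are taken from the text; the dictionary keys are invented labels, this is not the authors' exact code, and the XGB configuration is omitted since it uses the XGBoost defaults.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

SEED = 24  # the text seeds fold generation and training with 24

configs = {
    # RF: only the number of trees raised to 1000
    "rf": RandomForestRegressor(n_estimators=1000, n_jobs=12, random_state=SEED),
    # SVR 1: larger kernel cache (affects speed only, not model quality)
    "svr1": SVR(cache_size=4096),
    # SVR 2: additionally gamma="auto"
    "svr2": SVR(cache_size=4096, gamma="auto"),
    # MLP 1: two hidden layers of 500 neurons each
    "mlp1": MLPRegressor(hidden_layer_sizes=(500, 500), random_state=SEED),
    # MLP 2: early stopping on a 10% validation split, tolerance 0.001
    # over ten epochs
    "mlp2": MLPRegressor(hidden_layer_sizes=(500, 500), early_stopping=True,
                         validation_fraction=0.1, tol=0.001,
                         n_iter_no_change=10, random_state=SEED),
    # MLP 3: three hidden layers of 250 neurons, early stopping kept on
    "mlp3": MLPRegressor(hidden_layer_sizes=(250, 250, 250),
                         early_stopping=True, random_state=SEED),
}
print(sorted(configs))
```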
Six different descriptor/fingerprint combinations were also tested: first only the RDKit descriptors, then only the fingerprints, and finally both combined. Additionally, all three combinations were tested again in standardized (z-transformed) form. As a result, 42 combinations of regressor and training data configurations were compared.
A 5-fold cross-validation was performed for all configurations, which were evaluated using the MAE, the RMSE and the empirical coefficient of determination (r²). After training was completed for all configurations, two external test data sets, which contain no training data, were used to re-validate each trained cross-validated model. Here, MAE, RMSE and r² were also calculated as statistical quality measures. To ensure that no training data was contained in the test data sets, the canonical isomeric SMILES were checked for matches between training and test data sets, and corresponding hits were removed from the test data sets.
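One plausible design for the "CVRegressor" wrapper can be illustrated with a minimal, dependency-free sketch: one clone of the base regressor is fitted per fold, and predictions are averaged over the fold models. This is an assumed reading of the class's role, not its actual implementation; the trivial mean predictor stands in for a Scikit-Learn regressor, and all names besides CVRegressor are invented.

```python
import random

class CVRegressor:
    """Sketch of a cross-validation wrapper: fit one base regressor per
    fold and average their predictions. Works with any object exposing
    fit/predict (the real class wraps Scikit-Learn regressors)."""

    def __init__(self, make_regressor, n_folds=5, seed=24):
        self.make_regressor = make_regressor
        self.n_folds = n_folds
        self.seed = seed
        self.models = []

    def fit(self, X, y):
        indices = list(range(len(X)))
        random.Random(self.seed).shuffle(indices)  # seeded, as in the text
        folds = [indices[i::self.n_folds] for i in range(self.n_folds)]
        for held_out in range(self.n_folds):
            train_idx = [i for f, fold in enumerate(folds)
                         if f != held_out for i in fold]
            model = self.make_regressor()
            model.fit([X[i] for i in train_idx], [y[i] for i in train_idx])
            self.models.append(model)
        return self

    def predict(self, X):
        # Average the per-fold model predictions.
        preds = [m.predict(X) for m in self.models]
        return [sum(p) / len(preds) for p in zip(*preds)]

class MeanRegressor:
    """Trivial stand-in regressor: always predicts the training mean."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
    def predict(self, X):
        return [self.mean_ for _ in X]

cv = CVRegressor(MeanRegressor, n_folds=5).fit(list(range(10)), [3.0] * 10)
print(cv.predict([0, 1]))  # every fold model predicts 3.0
```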

Implementation
Operation
Prediction pipeline. To use the data preparation pipeline, the repository folder has to be entered and the created conda environment must be activated. Additionally, the Marvin 10 command-line tool cxcalc and the QUACPAC 26 command-line tool tautomers have to be added to the PATH variable. Also, the environment variables OE_LICENSE (containing the path to the OpenEye license file) and JAVA_HOME (referring to the Java installation folder, which is needed for cxcalc) have to be set.
After this preparation, brief usage information can be displayed with bash run_pipeline.sh -h. It should be noted that this model was built for monoprotic structures with regard to a pH range of 2 to 12. If the model is used with multiprotic structures, the predicted values will probably not be correct.

Different experimental methods
One crucial point in the field of pKa measurements (and their usage for pKa predictions) is the existence of different experimental methods 25,30. Based on the Novartis set, the correlation between capillary electrophoresis and potentiometric measurements (for 15 data points) was convincing enough (mean absolute error (MAE) = 0.202, root mean squared error (RMSE) = 0.264, correlation coefficient r² = 0.981) for us to combine pKa measurements from these different experimental methods (see Figure 3).
We also compared the pKa values of 187 monoprotic molecules contained in both the ChEMBL and DataWarrior data sets. Due to the missing annotation, it remained unclear whether different experimental methods were used or multiple measurements with the same experimental method had been performed (or a mixture of both). Either way, this comparison was an additional proof of concept that the ChEMBL and DataWarrior pKa data sources can be combined after careful curation. The aforementioned intersection is given in Figure 4. Between the annotated pKa values of these two data sets, r² was 0.949, the MAE was 0.275 and the RMSE was 0.576.
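The statistical measures quoted throughout (MAE, RMSE and r²) can be computed with a few lines of standard-library Python. This is a generic sketch rather than the authors' code, and it reads r² as the empirical coefficient of determination (1 - SS_res/SS_tot); the tiny arrays are invented for illustration.

```python
from math import sqrt

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Empirical coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true, y_pred = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```

For these values, MAE is 1/3, RMSE is sqrt(1/3) and r² is 0.5.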
The compounds for which the pKa values from the different sources deviate by more than two units are as follows:
• Hydralazine: pKa (DataWarrior) 5.075, pKa (ChEMBL25) 7.3
• Edaravone: pKa (DataWarrior) 11.3, pKa (ChEMBL25) 6.9
• Trifluoromethanesulfonamide: pKa (DataWarrior) 6.33, pKa (ChEMBL25) 9.7

Since the annotation of the experimental settings is not given in the DataWarrior file, we can only hypothesize that these differences are due to different experimental settings.

Machine Learning
The statistics for the five-fold cross-validation are given in Table 1. In terms of the mean absolute error, a random forest with scaled MorganFeatures (radius = 3) and descriptors gave the best performing model (MAE = 0.682, RMSE = 1.032, r² = 0.82).
For the two external test sets (see Table 2), a random forest with FeatureMorgan (radius = 3) gave the best model. This showed that our model had a slightly better performance than Marvin for the LiteratureCompilation, but Marvin performed better for the Novartis data set. For both data sets, our model 17 had a better predictive performance than the OPERA tool. Since some molecules had to be omitted for prediction with OPERA due to none or multiple predicted pKa values, no consistent significance test could be performed for all comparisons.

Discussion and conclusions
The developed model offers the possibility to predict pKa values for monoprotic molecules with good accuracy. However, since the model has been trained exclusively on monoprotic molecules, only monoprotic molecules can be predicted properly. In this respect, the model is limited. Nevertheless, the results show that its performance for monoprotic molecules can compete with that of existing prediction tools. The good performance of Marvin on the Novartis set is interesting to note: the RMSE was almost 0.4 units better than that of our top performing model. This could be because Marvin's training set is much larger than ours, providing a better foundation for training the Marvin model. In contrast, Marvin performed slightly worse than our top model on the LiteratureCompilation. The OPERA tool performed significantly worse than our model on both external test sets. We assume that the addition of 2470 ChEMBL pKa data points to our training set, which were not part of the OPERA training set, led to this difference in predictive performance. In addition, the pre-processing of the data was performed differently by OPERA in comparison to our pre-processing procedure.
As a next step in the enhancement of our pKa prediction model 17, we are currently expanding it to multiprotic molecules. We are also investigating the impact of different neural net architectures and types (such as graph neural nets) and the development of individual models for acids and bases. From a chemistry perspective, an analysis of the pKa effects of different functional groups (e.g. by means of matched molecular pairs analysis) is an ongoing effort for a future publication.
Data availability
The following data sets were used in this study:
• AvLiLuMoVe.sdf - Manually combined literature pKa data.

Software availability
The source code is available at https://github.com/czodrowskilab/Machine-learning-meets-pKa.

Baltruschat and Czodrowski report on the development of a Python-based tool for the prediction of pKa values. The tool is made available open source and will certainly be useful to the scientific community.
Could the authors please comment on the structural relationship between the training and the test data (how far apart are these datasets, e.g. measured by the distribution of pairwise Tanimoto coefficients based on molecular fingerprints)?
Could the authors please confirm statistical significance of the key statements made in the first paragraph of the Results section, starting with "This shows that our model slightly outcompetes..."?
In the Abstract, please name the open source model that was tested as part of the comparative performance analysis.
It is not clear from the manuscript text whether capillary electrophoresis and potentiometric measurements were the only two experimental methods considered in this work (or, for the Novartis dataset only).
Please revise the sentence starting with "We also compare the overlap...".
"make use of a" should be replaced by "makes use of a".
Please explain where DataWarrior's data originates from.
Please comment on the outliers observed in Figure 2.
Please explain the procedure that was followed to ensure that there was no overlap between the training data and the external validation sets (the Novartis dataset in particular).
"isomeric SMILES": is it canonical as well?
The part starting with "For the two external test sets..." is difficult to read. Could the authors find a more simple way to describe this and enable readers to compare the individual values more easily?

Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound? Yes

If applicable, is the statistical analysis and its interpretation appropriate? Partly
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
No competing interests were disclosed.

It is not clear from the manuscript text whether capillary electrophoresis and potentiometric measurements were the only two experimental methods considered in this work (or, for the Novartis dataset only).
We mention this already in the section "Different experimental methods": "Due to the missing annotation, it remained unclear if different experimental methods were used or multiple measurements with the same experimental method have been performed (or a mixture of both)".
Please revise the sentence starting with "We also compare the overlap...".
Changed "We also compared the overlap of the filtered (see next section) ChEMBL and DataWarrior data sets, 187 monoprotic molecules could be identified in both sources." to "We also compared the pKa values of 187 monoprotic molecules contained in both the ChEMBL and DataWarrior data sets."

"make use of a" should be replaced by "makes use of a".
Done.

Please explain where DataWarrior's data originates from.
We cannot make any statement about the origin, because it is not included in the DataWarrior file.
Please comment on the outliers observed in Figure 2.
We included the names of the outliers and the differing pKa values originating from the different data sources, where they differ by more than two log units.
Please explain the procedure that was followed to ensure that there was no overlap between the training data and the external validation sets (the Novartis dataset in particular). To ensure that no training data was contained in the test data sets, the canonical isomeric SMILES were checked for matches in both training and test data sets and corresponding hits were removed from the test data sets. Chirality was not included in this comparison.
"isomeric SMILES": is it canonical as well? Yes, it is the canonical isomeric SMILES generated by RDKit.
Avoid using "you" in the Implementation section (use "one [can]..." instead).
Done.

The part starting with "For the two external test sets..." is difficult to read. Could the authors find a more simple way to describe this and enable readers to compare the individual values more easily?
We have pulled most of the numbers apart and included bullet points which improves the readability and makes a comparison more feasible.
No competing interests were disclosed.

3. In the discussion section, the developed method should first be discussed before discussing its performance relative to other methods. This was done and is reflected in the new manuscript version.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.

Version 1
Author Response 05 Mar 2020
Paul Czodrowski, TU Dortmund University, Otto-Hahn-Strasse 6, Germany
Thanks for the comments, we will take a closer look at the distinction between acidic and basic pKa and the different OPERA models in a follow-up publication.
Currently, a check of the molecules to be predicted with respect to the applicability of the trained model is not implemented, because otherwise the Marvin tool from ChemAxon would also be required for the prediction. With Marvin, the classification between mono- and multiprotic molecules is performed. However, we are considering adding an optional validation option for the input structures.
Competing Interests: No competing interests were disclosed.

Reader Comment 20 Feb 2020
Kamel Mansouri, ILS, USA
This is a good attempt to model pKa, which is an important parameter to predict. This type of work is also a good addition to the existing free and open-source pKa predictors such as OPERA. The authors clearly state that their tool only works on monoprotic chemicals and doesn't differentiate between acidic and basic pKa. However, in their comparison with other tools that do, the authors fail to make that differentiation. To compare like with like, the authors should compare their predictions with OPERA predictions separately for acidic and basic pKa, also because one of their data sources, DataWarrior (also used for OPERA training and testing), differentiates between the two (acidic vs. basic).
Since the model only works for monoprotic chemicals, is there an applicability domain assessment or some sort of error message that informs the user when they try to use it for a multiprotic chemical? If not, it should be added.
Both the results and discussion sections should be expanded further to clarify the strengths and limitations of the tool.
I have no competing interests.
