Due to their computational efficiency, 2D fingerprints are typically used in similarity-based high-content screening. The interaction of a ligand with its target protein, however, relies on its physicochemical interactions in 3D space. Thus, ligands with different 2D scaffolds can bind to the same protein if these ligands share similar interaction patterns. Molecular fields can represent those interaction profiles. For efficiency, the extrema of those molecular fields, named field points, are used to quantify the ligand similarity in 3D. The calculation of field points involves the evaluation of the interaction energy between the ligand and a small probe shifted on a fine grid representing the molecular surface. These calculations are computationally prohibitive for large datasets of ligands, making field point representations of molecules intractable for high-content screening. Here, we overcome this roadblock by one-shot prediction of field points using generative neural networks based on the molecular structure alone. Field points are predicted by training an SE(3)-Transformer, an equivariant, attention-based graph neural network architecture, on a large set of ligands with field point data. The resulting data demonstrate the feasibility of this approach to precisely generate negative, positive and hydrophobic field points within 0.5 Å of the ground truth for a diverse set of drug-like molecules.
1. Introduction
Similarity-based virtual screening often relies on 2D representations of molecular structures. The interaction between a ligand and a target protein, however, depends on the strength of physicochemical interactions between the two entities in 3D space. Those interactions are best modeled by molecular interaction fields of a ligand with molecular probes characterizing the interacting protein. Consequently, ligands with different molecular topology but similar molecular interaction fields can bind at the same binding site. In Cheeseright et al (2006) it was suggested that the molecular interaction fields are sufficiently well represented by their extrema (see figure 1), named field points. In the same publication, a methodology for constructing molecular field points for electrostatic, van der Waals and hydrophobic interaction fields was described. The successful identification of alternative lead compounds with different molecular topology but similar binding properties based on field points was illustrated for a range of different ligands and targets (e.g. in Cheeseright et al (2008, 2009), Low and Vinter (2008)). In this paper, we train an equivariant, attention-based graph neural network for field point prediction. Equivariance is a mathematical property describing how a transformation of a function's input affects its output (see definition 1). In the context of deep learning, equivariance properties allow for data efficiency by reflecting problem symmetries. Thus, incorporating equivariance into a model seems specifically promising in the area of computational biology and chemoinformatics, as data is scarce and the biochemical processes occur irrespective of rotational and translational coordinate transforms.
Convolutional neural networks (CNNs) were first applied in Lecun et al (1998) and have since then proved to be impressively successful for a range of applications, such as the analysis of image, video and audio data. CNNs owe this effectiveness to weight sharing, constructed in a way that results in translation equivariance (see e.g. section 1.1 in Gerken et al (2021) for a formal context). Loosely speaking, this equivariance property ensures that applying a convolutional layer to a translated image is equivalent to translating the result of the application of the convolutional layer to the original image.
Inspired by the equivariance properties of CNNs, extensive research effort was devoted to the construction of neural networks satisfying equivariance properties in a more general, group theoretically formalized context (see e.g. Cohen and Welling (2016) for pioneering work). These architectures reduce the number of parameters while maintaining expressivity, by incorporating existing problem-inherent symmetries into the model. The resulting reduction of model complexity leads to increased training efficiency, specifically in higher dimensions. Further advantages of equivariance properties are a more understandable, interpretable and robust response of the network to transformations of the input data.
In the context of chemoinformatics we are concerned with problem symmetries in three-dimensional space: a molecule is still of the same type, no matter how it is shifted or rotated in Euclidean space. The strength and dynamics of protein-ligand interactions do not depend on the point of observation. If we rotate our point of view onto the same molecule, our prediction of the molecular interaction fields should be rotated accordingly. Naturally, it seems to be a promising approach to introduce an inductive bias into a neural network that mathematically guarantees such properties. More specifically, we are interested in network architectures that are equivariant w.r.t. the group of rotations and translations in three dimensions, i.e. SE(3). One model architecture satisfying this property is the SE(3)-Transformer (introduced in Fuchs et al (2020)). The SE(3)-Transformer is an attention-based graph neural network with tensor-field-type building blocks (see Thomas et al (2018)). We build on the model suggested in Fuchs et al (2020) and its efficient implementation by NVIDIA (NVIDIA 2022), and apply it to a large database consisting of small molecules and their field points.
In section 2 we provide details on the dataset and the construction of descriptors. In section 3 we describe the model architecture and introduce the loss function used for learning. For the purpose of quantifying the quality of the predictions of our model, we introduce evaluation functions in section 4 and discuss the results. Finally, in section 5 we perform ablation studies to analyze the impact of the individual descriptors.
2. Dataset preparation
The original data set consists of million small molecules of sizes ranging from 6 to 100 atoms. For each molecule, the data set contains up to 5 different conformations. The data were artificially generated by third-party software in accordance with the methodology presented in Cheeseright et al (2006) (see appendix A).
Molecules are represented by graphs in which information between neighboring nodes is exchanged along edges by the SE(3)-Transformer. Two different types of graph topologies were tested. In one approach, the graph topology is defined by the covalent bonds. In the second approach, the graph topology is based on Euclidean distances, i.e. two nodes (e.g. atoms) are connected via an undirected edge if their distance is less than a specified threshold.
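The distance-based topology can be sketched as follows; this is an illustrative helper (not the authors' code), with the 7 Å threshold used by the best-performing model in this paper:

```python
import numpy as np

def distance_graph(coords, threshold=7.0):
    """Return the undirected edge list (i, j), i < j, of atoms whose
    Euclidean distance is below `threshold` (in Angstrom)."""
    coords = np.asarray(coords, dtype=float)
    # Pairwise distance matrix via broadcasting.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.where((dist < threshold) & (dist > 0))
    return [(a, b) for a, b in zip(i, j) if a < b]

# Toy example: three collinear atoms 5 A apart. Atoms 0 and 2 are 10 A
# apart and therefore share no edge at a 7 A cutoff.
edges = distance_graph([[0, 0, 0], [5, 0, 0], [10, 0, 0]], threshold=7.0)
```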
The atom type is one-hot encoded as a vector of dimension 24. Similarly, the node degree is one-hot encoded as a 4-dimensional vector. For the partial charge, atom size, and Wildman–Crippen logP value we apply a radial basis function expansion. In detail, let $x$ be a feature taking values in the range $[a, b]$ and let
$\mu_1 < \mu_2 < \cdots < \mu_m$
be equidistant support points in the range of values of the corresponding feature. Define
$\varphi_k(x) = \exp\!\left(-\frac{(x - \mu_k)^2}{2\sigma^2}\right), \qquad k = 1, \ldots, m,$
with a width parameter $\sigma > 0$. Then the scalar feature $x$ is expanded to an $m$-dimensional vector as follows:
$x \mapsto \left(\varphi_1(x), \ldots, \varphi_m(x)\right).$
For the partial charge we choose , for the atom size and for the Wildman–Crippen logP value .
In total, the features ('node degree', 'atom type', 'partial charge', 'atom size', 'Crippen logP value') add up to a vector of dimension . That means each node in the graph is associated with a feature vector that serves as input to the model described in the next section.
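The radial basis function expansion above can be sketched as follows. The choice of width (here the support-point spacing) is an assumption, since the exact constants are not reproduced in this summary:

```python
import numpy as np

def rbf_expand(x, lo, hi, m):
    """Expand a scalar feature x in [lo, hi] into m Gaussian RBF values
    centred at equidistant support points."""
    mu = np.linspace(lo, hi, m)      # equidistant support points
    sigma = (hi - lo) / (m - 1)      # assumed width: the spacing
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# e.g. a partial charge of 0.0 expanded over an assumed range [-1, 1]:
vec = rbf_expand(0.0, -1.0, 1.0, 5)
```

The expansion turns a single scalar into a smooth, localized vector encoding, which is better suited as neural network input than the raw value.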
3. Model and loss function
In the following we use the same terminology of rotation order and fiber structure as in Fuchs et al (2020) (see Terminology 1 in the appendix). Our model is built on the NVIDIA implementation (NVIDIA 2022) of the SE(3)-Transformer as described in Fuchs et al (2020). In detail, we use a neural network consisting of 7 'SE3 Attention Blocks' of the following form (see figure 2 and Terminology 2–4):
- ConvSE3 (Tensor field network convolution transforming an arbitrary input fiber to an arbitrary output fiber. In this case used for computing attention key fiber and value fiber with specified number of degrees and channels)
- LinearSE3 (self-interaction of channels within degrees for computing the query fiber)
- AttentionSE3 (attention calculation over all neighboring atoms)
- LinearSE3 (self-interaction of channels within degrees to obtain output fiber)
We apply layer normalization in each 'ConvSE3' component. In total, the network contains about 3.97 million learnable parameters. In the hidden layers we allow rotation orders 0, 1 and 2 with 32 channels each, corresponding to the fiber structure ['0': 32; '1': 32; '2': 32]. Recall that scalars are represented by rotation order 0, vectors by rotation order 1, and rotation order 2 corresponds to a higher-order geometric object of dimension 5. Hence, per node (e.g. atom) of the graph, the attention block calculates a vector of dimension 32 · (1 + 3 + 5) = 288. Subsequent to the 7 attention blocks, we apply a final 'ConvSE3' layer in order to transform to the output fiber structure ['0': 3; '1': 3] (the network architecture is illustrated in figure 2). Thus, the model predicts 3 scalars and 3 vectors per node (e.g. atom) of the graph (see first image of figure 3). These vectors represent the positions of field points relative to the coordinates of the corresponding atom. The associated scalar per vector corresponds to a weighting of this specific prediction. In order to train these predictions, a suitable loss function was developed. Note that we treat the field point prediction tasks as separate problems for each type of field point (positive, negative, hydrophobic, van der Waals). Thus, a separate model was trained for each type of field point.
Consider a molecule consisting of $n$ nodes. For node $i$, denote by $s_i^{(1)}, s_i^{(2)}, s_i^{(3)}$ the predicted scalars (used to determine the probability weights in the following) and by $v_i^{(1)}, v_i^{(2)}, v_i^{(3)}$ the predicted vectors (used to point to the field points relative to the atom position in the following), respectively. By applying a softmax function, the scalars determine a probability distribution as follows: For $i = 1, \ldots, n$ and $k = 1, 2, 3$, define
$w_i^{(k)} = \frac{\exp\big(s_i^{(k)}\big)}{\sum_{j=1}^{n}\sum_{l=1}^{3}\exp\big(s_j^{(l)}\big)}.$
Denote by $x_i$ the coordinate position of the $i$'th node (i.e. atom). For the purpose of training, we interpret the predicted probabilities and vectors as determining weights and centers of a Gaussian Mixture Model as follows:
$q(y) = \sum_{i=1}^{n}\sum_{k=1}^{3} w_i^{(k)}\,\varphi_{x_i + v_i^{(k)},\,\sigma}(y), \qquad (1)$
where we denote by
$\varphi_{\mu,\sigma}(y) = \big(2\pi\sigma^2\big)^{-3/2}\exp\!\left(-\frac{\|y - \mu\|^2}{2\sigma^2}\right)$
the isotropic Gaussian density function in 3 dimensions with mean $\mu$ and standard deviation $\sigma$. For a molecule with $m$ field points of a certain type (e.g. hydrophobic), let $y_j$ and $f_j$, for $j = 1, \ldots, m$, denote the coordinate position and field value of the $j$'th field point. We define probabilities in proportion to the field values:
$p_j = \frac{|f_j|}{\sum_{l=1}^{m}|f_l|}. \qquad (2)$
Analogously to (1), the probability weights and true field point locations determine a Gaussian Mixture Model as
$p(y) = \sum_{j=1}^{m} p_j\,\varphi_{y_j,\sigma}(y). \qquad (3)$
Note that weighting by field values as in (2) takes into account that larger field values are more relevant in determining the binding properties of the molecule and thus more important to predict correctly.
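A hedged sketch of the two mixtures follows: q is built from predicted weights and centers, p from the field-value-weighted ground-truth locations. Function names and the choice of sigma are illustrative assumptions, not the authors' code:

```python
import numpy as np

def gaussian3d(x, mu, sigma):
    """Isotropic 3D Gaussian density with mean mu and std sigma."""
    d2 = np.sum((x - mu) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** 1.5

def mixture_density(x, weights, centers, sigma=0.5):
    """Evaluate a Gaussian mixture with given weights/centers at x."""
    return sum(w * gaussian3d(x, mu, sigma)
               for w, mu in zip(weights, np.asarray(centers, dtype=float)))

# Ground-truth weights in proportion to (absolute) field values,
# using two sample field values from table 3:
field_values = np.array([-13.796, -4.550])
p_weights = np.abs(field_values) / np.abs(field_values).sum()

w = np.array([0.5, 0.5])
centers = [[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]]
near = mixture_density(np.array([0.0, 0.0, 0.0]), w, centers)
far = mixture_density(np.array([2.5, 0.0, 0.0]), w, centers)
```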
By construction of q and p (see (1), (3)) we can expect that a density q approximately similar to p will result in reasonable field point predictions. A natural choice as a measure of divergence between two densities is the symmetrized Kullback–Leibler divergence:
$D(p, q) = \mathrm{KL}(p\,\|\,q) + \mathrm{KL}(q\,\|\,p) = \int p(y)\log\frac{p(y)}{q(y)}\,\mathrm{d}y + \int q(y)\log\frac{q(y)}{p(y)}\,\mathrm{d}y. \qquad (4)$
Let us denote by and the probabilities and vectors determining the densities q and p respectively (compare (1) and (3)). Since calculating the quantity (4) is analytically intractable and computationally (e.g. via Monte Carlo simulations) demanding, we replace it by the following loss:
Note that intuitively, the equation (5) can be thought of as taking the discrete probability at support points and respectively, as the first argument of the KL divergence in both terms of (4).
Each field point across different molecules should be equally important (in proportion to its field value) to predict correctly. Hence, we need to scale the loss L2 by the sum of field values to obtain:
During training we observed that a small penalization of the length of the prediction vectors is essential for the preservation of locality and for training convergence. Moreover, penalizing large probability weights via a quadratic sum of the probability weights encouraged the model to predict clouds of predictions instead of a few high-probability vectors, leading to more robustness and better performance. Including both penalization terms, we define the final loss function as
where and β = 10.
The model described so far results in predictions forming point clouds around the target field point (see second image of figure 3). In order to obtain more localized and precise predictions, we apply a clustering algorithm. We choose agglomerative clustering as implemented in Pedregosa et al (2011) with a linkage distance threshold of 1 Å to obtain the final field point predictions (third image of figure 3). The probability weights of all predictions contained in a cluster are summed up, and a prediction is made at the weighted average position if the sum exceeds a certain threshold . More specifically, for , let be the k'th cluster found by the clustering algorithm, consisting of the probabilities and coordinates :
Then the k'th field point prediction suggested by the model is:
We decide to make a prediction at the point if the associated cluster probability exceeds the threshold , meaning
Assuming that clusters were predicted, we denote by
the tuple of predicted cluster locations.
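The post-processing step above can be sketched as follows: agglomerative clustering of the predicted point cloud (scikit-learn, distance threshold 1 Å), then a field point at each cluster's probability-weighted mean whenever the summed probability exceeds the cutoff c. Variable names are illustrative; this is a sketch of the procedure, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_predictions(points, probs, c=0.005, threshold=1.0):
    """Aggregate a cloud of weighted point predictions into field points."""
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=threshold).fit_predict(points)
    field_points = []
    for k in np.unique(labels):
        mask = labels == k
        p_sum = probs[mask].sum()
        if p_sum > c:  # keep only sufficiently probable clusters
            center = (probs[mask, None] * points[mask]).sum(0) / p_sum
            field_points.append(center)
    return np.array(field_points)

# Two tight clouds far apart collapse to two field points:
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0], [5.1, 0, 0]])
pr = np.array([0.3, 0.3, 0.2, 0.2])
fps = cluster_predictions(pts, pr)
```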
4. Results
The data set is split randomly into training set () and test set () on a molecular basis (all conformations of one molecule will be in the same set). We train on a GPU with 24 GB of memory (GeForce RTX 3090) with a batch size of 50 (accumulated batch size is ) for about 3 days (see Hinz (2023) to reproduce results). In order to quantify the quality of our field point predictions, let us introduce the true positive rate and the weighted true positive rate (i.e. weighted sensitivity) as evaluation functions: For a maximum allowed distance r > 0 between predicted and ground-truth field point and cluster location as in equation (8), define
To put the quantity in equation (10) into context, we also calculate the positive predictive value (precision):
Note that (equation (9)) corresponds to the proportion of ground truth field points that were predicted by the model (in the sense that at least one prediction is within the distance r > 0). The quantity (equation (10)) is defined similarly, but weights the correct prediction of each ground truth field point in proportion to its field value. The weighted measure might be more relevant, as it is indeed more important to predict field points of high value correctly. The precision corresponds to the proportion of predictions that are closer than distance r > 0 to a ground truth field point. In the following analysis, we chose the cluster probability threshold c = 0.005 for predicting a field point at a cluster center (compare (7)) for all field point types. Results for a range of cutoff values can be found in figure 10.
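The three evaluation quantities can be sketched as follows; function names are illustrative, and the field-value weighting uses absolute values as an assumption (field values in table 3 can be negative):

```python
import numpy as np

def _dists(a, b):
    """Pairwise Euclidean distances between two point sets."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def tpr(truth, pred, r):
    """Fraction of ground-truth points with a prediction within r."""
    return (_dists(truth, pred).min(axis=1) < r).mean()

def weighted_tpr(truth, values, pred, r):
    """Like tpr, but each hit weighted in proportion to its field value."""
    hit = _dists(truth, pred).min(axis=1) < r
    w = np.abs(values) / np.abs(values).sum()
    return (w * hit).sum()

def precision(truth, pred, r):
    """Fraction of predictions within r of some ground-truth point."""
    return (_dists(pred, truth).min(axis=1) < r).mean()
```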
The results for each field point type and maximal distance r from the ground truth position are shown for the test set in table 1 and for the training set in table 4. Note that larger values for and are considered as higher prediction quality, with 1 being the optimum. By construction, all three evaluation quantities are monotonically increasing in r. We observe some overfitting to the data (specifically for r = 0.5 Å) that becomes less pronounced for larger values of r. For all field points we achieve a precision of at least for r = 0.5 Å and at least for r = 1 Å, meaning that only few predictions are far off from a ground truth field point. Note that allowing a maximum error of 0.5 Å is a rather strict criterion compared, for example, to docking predictions with an accepted tolerance of typically 2 Å. Also for the potential use of our method for pharmacophore-based similarity screening, a tolerance of 0.5 Å is well within typical tolerance ranges of 1–2 Å.
Table 1. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'node degree', 'atom type'; graph topology based on 7 Å atom distances.
All descriptors, 7 Å graph topology | ||||||||
---|---|---|---|---|---|---|---|---|
Negative field points | Positive field points | |||||||
0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 | |
0.848 | 0.896 | 0.909 | 0.919 | 0.837 | 0.874 | 0.885 | 0.896 | |
0.768 | 0.820 | 0.833 | 0.843 | 0.775 | 0.811 | 0.822 | 0.833 | |
0.755 | 0.879 | 0.915 | 0.934 | 0.825 | 0.897 | 0.920 | 0.933 | |
Hydrophobic field points | Van der Waals field points | |||||||
0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 | |
0.960 | 0.970 | 0.983 | 0.991 | 0.820 | 0.848 | 0.863 | 0.884 | |
0.948 | 0.960 | 0.972 | 0.983 | 0.792 | 0.818 | 0.833 | 0.854 | |
0.906 | 0.955 | 0.975 | 0.987 | 0.878 | 0.930 | 0.954 | 0.970 |
The model performs better for positive electrostatic field points than for negative electrostatic field points in terms of precision, specifically for r = 0.5 Å. However, with a weighted true positive rate of 0.896 (negative field point) and 0.873 (positive field point) at r = 1 Å on the test set, the model seems to capture the majority of high value field points for both field point types.
Figure 4 shows three examples for (a) negative and (b) positive field points highlighting the overall excellent performance of the model to reproduce the ground truth points. Not surprisingly, field points that are predominantly caused by single nearby polar atoms with highly negative or positive partial charge are precisely reproduced (e.g. field points n1, n9, n11, p3, p15, p16, etc). However, the model also learns field points that originate from the electrostatic potential of multiple, sometimes topologically distant, polar atoms (e.g. n2, n4–6, p1, p2, p8, p20, etc). Negative and positive field points originate not only from atoms that can undergo hydrogen bonding, such as oxygen and nitrogen, but also from halogen atoms (n3) or electropositive hydrogen atoms (e.g. p18–21). This demonstrates that the network model not only learns trivial projections from isolated atoms but the topological and spatial context of the molecules. The latter is modeled in our network by defining graph edges with a maximum Euclidean distance of 7 Å. Whereas the dominant field points (e.g. n2, n4–7, n9) are all well reproduced, sometimes weak field points are not predicted (e.g. n8, n10) or their position is shifted (e.g. p1–2). In the case of n10, the inherent flexibility of the nearby hydroxyl group strongly influences the position of this field point. Thus, field points related to such flexible hydroxyl groups (and similar functional groups) will have variable positions dependent on the generated rotation state of the functional group. This variability within the training set makes it very difficult for the model to learn coherent rules of field point generation.
The best performance of the model was achieved for hydrophobic field points (figure 4(c)). For r = 1 Å on the test set, the precision () as well as the sensitivity (, ) indicate that almost all ground truth field points were predicted by the model, with very few far-off predictions. Many hydrophobic field points are located on hydrophobic atoms (e.g. h1, h9) or in aromatic rings (e.g. h2, h3, h5–7, etc). Interestingly, the model is able to differentiate between homocyclic rings (e.g. h2, h3), where the field point is co-localized with the center of mass of the ring, and heterocyclic rings (e.g. h5–7), where the field point is shifted due to the presence of polar atoms within the ring structure. The right-most molecule in figure 4(c) shows the challenging case of a long aliphatic chain. Most field points were well reproduced, but some spatially nearby field points (e.g. h15–18) were compressed into single field points. This behavior is due to the applied clustering algorithm.
For the van der Waals field points, the model achieves a very high precision of 0.930 at a distance of r = 1 Å. However, compared to the other field point types the model predicts slightly fewer ground truth field points (). In figure 4(d) it can be observed that most existing predictions are accurate and only few field points were not predicted.
5. Ablation studies
To study the dependency of prediction performance on graph topology, the same network architecture was trained using a graph with edges defined by a maximum spatial distance of 5 Å, and using a graph defined only by the covalent bonds. In figures 5, 11, 14, 17 and tables 4–6 we display the precision and weighted sensitivity for models trained on graphs constructed from covalent bonds, 5 Å and 7 Å distances. We note that the model performance is superior for distance-based graph construction (both for 5 Å and 7 Å) in comparison to using covalent bonds. Increasing the distance threshold from 5 Å to 7 Å results in a precision gain of about 4 percentage points for all field point types except hydrophobic. The weighted sensitivity only slightly improves when increasing the distance threshold.
The main factor for this performance drop is the lack of information flow between spatially close but topologically distant atoms. Figure 6 displays an example of the inferior performance of this model. For example, field point 1 (in figure 6, right) results from the negative partial charges of the carbonyl atom and the single aromatic ring. The graph defined by covalent bonds alone is unable to capture this information, as those functional groups have a relatively large topological distance, i.e. there is a lack of information flow in the SE(3)-Transformer model. The model instead predicts field point c1 based on the carbonyl atom and c2 based on the aromatic ring, both ignoring the correlative effect of those functional groups. The same can be observed for field point 3, which is based on the fields from the aromatic ring and the secondary amine. This field point is not reproduced by the model based on covalent bonds.
The model based on covalent bonds predicts additional field points c1 and c3 which are not present in the ground truth data. Those field points do not exist in reality as the negative potential from the corresponding carbonyl atoms is largely cancelled by the positive methyl group and positive amine for c1 and c3, respectively. The model that is based on information flow via covalent bonds is unable to correctly capture those physical effects as the atoms with opposite partial charge are not topologically adjacent to each other.
To study the importance of single descriptors for the model performance, we trained the model separately with one descriptor only. The results of this experiment are shown in tables 7–11 and visualized in figures 7, 12, 15 and 18. We observe that a single descriptor already suffices to obtain a decent model performance for all types of field points. For positive and negative electrostatic field points, partial charge and atom type are the strongest single descriptors. For van der Waals field points, the atom type is the most important descriptor, yielding a precision of 0.845 and a weighted sensitivity of 0.822. For hydrophobic field points, the choice of descriptor does not have a strong impact on the model performance. Those models also have the highest precision among all single-descriptor models.
In a second experiment, we left one descriptor out and trained the model on the 4 remaining descriptors. The results are provided in tables 12–16 and figures 8, 13, 16 and 19. We note that leaving out any descriptor results in lower precision for all field point types compared to the full model. However, the weighted sensitivity is slightly better for positive field points when leaving out atom size or node degree. Note that there is a trade-off between precision and sensitivity (also cf figure 10). Leaving out the descriptor 'partial charge' leads to significantly worse performance (in both sensitivity and precision) for all field points. The descriptors 'node degree' and 'atom type' seem to be of particular importance for the van der Waals field points. For hydrophobic field points, there seems to be enough redundancy among the descriptors; only dropping 'partial charge' has a notable effect.
6. Conclusion
We demonstrated the benefits of an equivariant, attention-based graph neural network in the context of molecular field point prediction. A model based on the SE(3)-Transformer was trained using a large set of small molecules. Our model successfully predicts field points of different types of molecular interaction fields. In comparison to current methods of field point prediction, which are based on computationally demanding calculations of the interaction energies between a probe and the molecule, our trained model allows for an efficient one-shot prediction of field points. We also demonstrated that field points that are spawned by topologically distant atoms can be reliably predicted if the graph structure is based on the spatial rather than topological context of the molecule. In further research, the optimized model will allow the use of physicochemical field point information for similarity-based virtual screening on huge compound databases.
Acknowledgment
The work was financially supported by the Swiss National Science Foundation (Project Number: 310030_197629).
Data availability statement
The data that support the findings of this study are openly available at the following URL/DOI: https://github.com/hinzflorian/se3transformer_fieldpoint_prediction.
Conflict of interest
The authors declare no competing interests.
Appendix A: Field point calculations
In the following we provide a summary of the methodology for field point calculations as described in Cheeseright et al (2006). Given a molecule conformation of atoms, a grid of 120 points is defined on the slightly reduced solvent-accessible surface of each atom. A probe atom is initially positioned at each grid point, and its interaction energy with the molecule is optimized using a simplex algorithm. The probe atom is assigned the van der Waals parameters of oxygen, while its charge is adjusted based on the chosen potential (see below). The probe positions on each atom then converge to common extrema. Interaction energy extrema taking values below a certain threshold are filtered out.
The van der Waals field points are calculated by employing a Morse potential to characterize the van der Waals interaction with a neutral probe p as follows
where
with being parameters from the XED force field, is the distance between atom j and the probe p and is the sum of the vdW radii.
Electrostatic positive and negative field points are calculated by assuming a Coulombic interaction for positive and negative probes respectively as follows
where D = 4 is the dielectric constant of the medium and qp is the charge of the probe (taking values ).
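As a toy numeric illustration of the Coulombic term above (a sketch, not the production field point search, which minimises this energy on the solvent-accessible surface):

```python
import numpy as np

def coulomb_energy(probe_pos, atom_pos, charges, q_probe=1.0, D=4.0):
    """Coulomb interaction energy of a probe of charge q_probe with the
    molecule's partial charges, E = sum_j q_j * q_probe / (D * r_j),
    with dielectric constant D = 4 as in the text."""
    r = np.linalg.norm(np.asarray(atom_pos, float) - np.asarray(probe_pos, float),
                       axis=1)
    return float(np.sum(np.asarray(charges) * q_probe / (D * r)))

# A positive unit probe 1.5 A from a single atom with partial charge
# -0.376 (a sample value from table 2) gives a negative (attractive) energy:
e = coulomb_energy([0.0, 0.0, 1.5], [[0.0, 0.0, 0.0]], [-0.376])
```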
The calculation of the hydrophobic field points assumes the following potential
where . The potential constitutes the attractive energy with a neutral probe and reflects the hydrophobicity of a fragment or group. Electronegative atoms are assigned a zero weighting relative to carbon, which signifies low hydrophobicity. On the other hand, hydrogens receive a 0.5 weighting, which reduces their impact without completely nullifying their influence.
Appendix B: Terminology
In the following section, we denote by ⊕ the direct sum, by ⊗ the Kronecker product, and by $I_3$ and $I_5$ the identity matrices in 3 and 5 dimensions respectively.
Definition 1 (equivariance). Let G be a group and let X and Y be sets. Let , be group actions of G on X and Y respectively. A map Φ: X → Y is called equivariant if it satisfies:
Moreover, in the special case of
the map Φ is called invariant.
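A small numeric illustration of definition 1: the cross product f(u, v) = u × v is equivariant under rotations R in SO(3), i.e. f(Ru, Rv) = R f(u, v), which is exactly the behaviour required of the rotation-order-1 outputs of the network. The example below checks this for a rotation about the z-axis:

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z-axis (an element of SO(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 0.5])
v = np.array([-0.3, 0.7, 1.1])
R = rotation_z(0.8)

lhs = np.cross(R @ u, R @ v)   # transform the inputs, then apply f
rhs = R @ np.cross(u, v)       # apply f, then transform the output
```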
Terminology 1 (rotation order, fiber structure). A linear group representation of the group of rotations in 3 dimensions, SO(3), can be decomposed into irreducible representations of dimensions $2\ell + 1$ for $\ell = 0, 1, 2, \ldots$. We refer to $\ell$ as the 'rotation order'. The rotation orders 0 and 1 can be viewed as scalars and vectors in 3-dimensional space, respectively. If a feature vector v consists of elements (also called 'channels') of rotation orders , respectively, we say that its 'fiber structure' is . Consequently the feature vector v is structured as follows:
Defining
we denote
Moreover, for with we agree to the notation
being a subvector of v consisting only of the first channels of rotation orders and 2 respectively.
Terminology 2 (attention block A). Consider a molecule consisting of atoms with coordinates . By construction, each atom is associated with an initial feature vector . Thus, the molecule can be represented as a graph , with the set of nodes and E the set of edges connecting nodes. The attention block consists of the following four layers:
1.
ConvSE3 Layer: For all , calculate tensor field convolutions to obtain key and value vectors
where and are tensor field network type embedding matrices.
2.
LinearSE3 Layer: Calculate self-interaction to obtain the query vector
where .
3.
AttentionSE3 Layer: Calculate attention per node. For , let denote the set of indices of neighbors of the i'th node. For , define
4.
LinearSE3 Layer: For , concatenate the fibers of and :
Calculate the output vector
where is a block matrix, consisting of self-interaction submatrices of dimensions , , . More specifically
where . Note that the multiplication with corresponds to a convolution of channels within each rotation order.
Terminology 3 (attention block B). Using the same notation as in Terminology 2, let denote the coordinates of the i'th node. The output of 'Attention Block A' is a feature vector per node . 'Attention Block B' transforms this feature vector to a feature vector of the same fiber structure by applying the following four layers:
1.
ConvSE3 Layer: For all , calculate tensor field convolutions to obtain key and value vectors
where and are tensor field network type embedding matrices.
2.
LinearSE3 Layer: Calculate self-interaction to obtain the query vector
where is a block matrix, consisting of self-interaction submatrices of dimensions , , . More specifically
where . Note that the multiplication with WQ corresponds to a convolution of channels within each rotation order.
3.
AttentionSE3 Layer: Calculate attention per node. For , let denote the set of indices of neighbors of the i'th node. For define
4.
LinearSE3 Layer: For , concatenate the fibers of and
Calculate the output vector
where is a block matrix, consisting of self-interaction submatrices of dimensions , , . More specifically
where . Note that the multiplication with corresponds to a convolution of channels within each rotation order.
Terminology 4 (ConvSE3 Layer C). The output of 'Attention Block B' is a feature vector (fiber structure ['0': 32, '1': 32, '2': 32]) per node . The final 'ConvSE3 Layer' transforms to the output feature vector of fiber structure ['0': 3, '1': 3] as follows: For define
where is a tensor field network type embedding matrix and is block matrix, consisting of self-interaction submatrices of dimensions and . More specifically
where . Note that the multiplication with Wself corresponds to a convolution of channels within each rotation order.
Appendix C: Tables
Table 2. Features per atom for a sample molecule. Abbreviations: N: atom name, A: atom type, P: partial charge, S: atom size, W: Wildman–Crippen logP, D: node degree.
Molecule information | ||||||
---|---|---|---|---|---|---|
Coordinate | N | A | P | S | W | D |
O | 11 | −0.3760 | 1.55 | −0.1526 | 1 | |
C | 2 | 0.2370 | 1.70 | −0.2783 | 3 | |
N | 8 | −0.3420 | 1.60 | 0.1836 | 2 | |
C | 2 | 0.1730 | 1.70 | −0.2783 | 3 | |
N | 8 | −0.3600 | 1.60 | 0.1836 | 2 | |
C | 2 | 0.1480 | 1.70 | −0.2783 | 3 | |
C | 1 | 0.1260 | 1.70 | −0.2051 | 4 | |
N | 9 | −0.1520 | 1.60 | −0.7096 | 3 | |
N | 8 | −0.0130 | 1.60 | 0.1836 | 2 | |
C | 2 | 0.0020 | 1.70 | −0.2783 | 3 | |
H | 15 | 0.1080 | 1.20 | 0 | 1 | |
H | 15 | 0.0700 | 1.20 | 0 | 1 | |
H | 15 | 0.2790 | 1.20 | 0 | 1 | |
H | 15 | 0.1010 | 1.20 | 0 | 1 |
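The descriptors of table 2 enter the network as one numeric feature vector per atom. A minimal sketch of how such a vector could be assembled; the one-hot width of 16 for the atom-type index and the descriptor ordering are assumptions for illustration, not the paper's exact encoding:

```python
import numpy as np

def atom_features(atom_type, partial_charge, size, logp, degree, n_types=16):
    """Concatenate a one-hot atom-type encoding with the numeric
    descriptors P, S, W and D of table 2."""
    one_hot = np.zeros(n_types)
    one_hot[atom_type] = 1.0
    return np.concatenate([one_hot, [partial_charge, size, logp, degree]])

# the oxygen row of table 2
f_o = atom_features(11, -0.3760, 1.55, -0.1526, 1)
```

In the SE(3)-Transformer all of these descriptors are rotation-order-0 (scalar) channels, since none of them changes under a rotation of the molecule.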
Table 3. Field point information. Abbreviations: F−: electrostatic negative, F+: electrostatic positive, FI: van der Waals, FO: hydrophobic.

| Coordinate | Field value | Field point type |
|---|---|---|
| | −13.796 | F− |
| | −4.550 | F− |
| | 7.845 | F+ |
| | 3.341 | F+ |
| | −1.211 | FO |
| | −1.391 | FO |
| | −1.856 | FO |
| | −1.451 | FO |
| | −1.717 | FO |
| | −1.542 | FO |
| | 3.403 | FI |
| | 1.295 | FI |
Table 4. Results on train set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'node degree', 'atom type'; graph topology based on 7 Å atom distances.

All descriptors, 7 Å graph topology, train set

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.892 | 0.916 | 0.922 | 0.929 | 0.880 | 0.899 | 0.904 | 0.911 |
| | 0.808 | 0.839 | 0.846 | 0.854 | 0.811 | 0.833 | 0.839 | 0.847 |
| | 0.804 | 0.905 | 0.931 | 0.945 | 0.871 | 0.924 | 0.939 | 0.948 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.981 | 0.985 | 0.990 | 0.994 | 0.832 | 0.855 | 0.869 | 0.889 |
| | 0.971 | 0.976 | 0.982 | 0.988 | 0.803 | 0.825 | 0.838 | 0.859 |
| | 0.939 | 0.976 | 0.987 | 0.993 | 0.895 | 0.941 | 0.961 | 0.974 |
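Tables 4 and 5 use a graph topology that connects all atom pairs closer than a fixed distance cutoff (7 Å and 5 Å respectively); such an adjacency can be sketched as:

```python
import numpy as np

def distance_graph(coords, cutoff=7.0):
    """Boolean adjacency matrix connecting all atom pairs within `cutoff`
    angstrom of each other (self-loops excluded)."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist <= cutoff) & ~np.eye(len(coords), dtype=bool)

# three collinear atoms at 0, 1.5 and 9 angstrom
coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [9.0, 0.0, 0.0]])
adj = distance_graph(coords)
```

A distance-based topology gives every atom many more neighbors than the covalent-bond topology of table 6, which is consistent with the markedly better scores of the distance-based models.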
Table 5. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'node degree', 'atom type'; graph topology based on 5 Å atom distances.

All descriptors, 5 Å graph topology

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.821 | 0.882 | 0.897 | 0.909 | 0.832 | 0.874 | 0.885 | 0.895 |
| | 0.739 | 0.803 | 0.820 | 0.832 | 0.771 | 0.813 | 0.824 | 0.834 |
| | 0.711 | 0.847 | 0.890 | 0.915 | 0.788 | 0.868 | 0.896 | 0.913 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.958 | 0.970 | 0.983 | 0.990 | 0.811 | 0.846 | 0.864 | 0.886 |
| | 0.947 | 0.960 | 0.973 | 0.983 | 0.786 | 0.818 | 0.835 | 0.856 |
| | 0.897 | 0.950 | 0.972 | 0.987 | 0.832 | 0.899 | 0.935 | 0.958 |
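The columns 0.5 to 2 are distance thresholds in Å. One plausible reading of a table entry (the exact definitions of the three unlabeled metric rows are given in the main text, not reproduced here) is the fraction of predicted field points whose nearest ground-truth field point lies within the threshold:

```python
import numpy as np

def fraction_within(pred, true, threshold):
    """Fraction of predicted points that have a ground-truth point within
    `threshold` angstrom (illustrative metric, not the paper's exact one)."""
    dist = np.linalg.norm(pred[:, None, :] - true[None, :, :], axis=-1)
    return float((dist.min(axis=1) <= threshold).mean())

pred = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
true = np.array([[0.4, 0.0, 0.0]])
```

Any such hit-fraction metric is monotonically non-decreasing in the threshold, which matches the left-to-right growth of the values within each table row.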
Table 6. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'node degree', 'atom type'; graph topology defined by covalent bonds.

All descriptors, covalent graph topology

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.536 | 0.742 | 0.811 | 0.849 | 0.343 | 0.580 | 0.681 | 0.734 |
| | 0.475 | 0.671 | 0.737 | 0.775 | 0.320 | 0.524 | 0.612 | 0.661 |
| | 0.349 | 0.518 | 0.620 | 0.708 | 0.258 | 0.446 | 0.567 | 0.649 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.871 | 0.931 | 0.954 | 0.971 | 0.309 | 0.501 | 0.607 | 0.687 |
| | 0.839 | 0.900 | 0.924 | 0.948 | 0.314 | 0.483 | 0.576 | 0.649 |
| | 0.728 | 0.879 | 0.939 | 0.973 | 0.332 | 0.545 | 0.698 | 0.811 |
Table 7. Results on test set. Model trained only with descriptor 'atom type'; graph topology based on 7 Å atom distances.

Only descriptor 'atom type'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.814 | 0.879 | 0.896 | 0.909 | 0.818 | 0.868 | 0.883 | 0.896 |
| | 0.733 | 0.801 | 0.818 | 0.831 | 0.772 | 0.819 | 0.833 | 0.845 |
| | 0.701 | 0.847 | 0.891 | 0.915 | 0.760 | 0.845 | 0.875 | 0.893 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.966 | 0.977 | 0.987 | 0.993 | 0.822 | 0.855 | 0.871 | 0.892 |
| | 0.956 | 0.967 | 0.978 | 0.986 | 0.800 | 0.831 | 0.847 | 0.868 |
| | 0.872 | 0.939 | 0.967 | 0.984 | 0.845 | 0.910 | 0.942 | 0.963 |
Table 8. Results on test set. Model trained only with descriptor 'node degree'; graph topology based on 7 Å atom distances.

Only descriptor 'node degree'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.786 | 0.862 | 0.881 | 0.765 | 0.765 | 0.821 | 0.838 | 0.853 |
| | 0.704 | 0.782 | 0.801 | 0.816 | 0.706 | 0.759 | 0.775 | 0.789 |
| | 0.677 | 0.825 | 0.871 | 0.899 | 0.748 | 0.840 | 0.874 | 0.894 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.949 | 0.966 | 0.979 | 0.989 | 0.795 | 0.844 | 0.864 | 0.886 |
| | 0.934 | 0.953 | 0.967 | 0.980 | 0.767 | 0.811 | 0.831 | 0.852 |
| | 0.881 | 0.942 | 0.968 | 0.983 | 0.810 | 0.889 | 0.928 | 0.954 |
Table 9. Results on test set. Model trained only with descriptor 'atom size'; graph topology based on 7 Å atom distances.

Only descriptor 'atom size'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.786 | 0.867 | 0.887 | 0.901 | 0.779 | 0.837 | 0.854 | 0.867 |
| | 0.708 | 0.790 | 0.810 | 0.824 | 0.723 | 0.776 | 0.791 | 0.804 |
| | 0.654 | 0.807 | 0.859 | 0.889 | 0.744 | 0.840 | 0.874 | 0.894 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.953 | 0.967 | 0.980 | 0.989 | 0.789 | 0.835 | 0.854 | 0.877 |
| | 0.941 | 0.955 | 0.969 | 0.981 | 0.762 | 0.803 | 0.822 | 0.845 |
| | 0.894 | 0.949 | 0.971 | 0.986 | 0.817 | 0.898 | 0.935 | 0.958 |
Table 10. Results on test set. Model trained only with descriptor 'partial charge'; graph topology based on 7 Å atom distances.

Only descriptor 'partial charge'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.805 | 0.874 | 0.891 | 0.903 | 0.822 | 0.870 | 0.882 | 0.893 |
| | 0.722 | 0.792 | 0.810 | 0.822 | 0.759 | 0.805 | 0.817 | 0.828 |
| | 0.708 | 0.852 | 0.895 | 0.919 | 0.760 | 0.854 | 0.888 | 0.907 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.950 | 0.966 | 0.979 | 0.988 | 0.770 | 0.818 | 0.839 | 0.864 |
| | 0.936 | 0.952 | 0.967 | 0.979 | 0.742 | 0.786 | 0.805 | 0.830 |
| | 0.889 | 0.948 | 0.971 | 0.986 | 0.811 | 0.894 | 0.933 | 0.956 |
Table 11. Results on test set. Model trained only with descriptor 'logP'; graph topology based on 7 Å atom distances.

Only descriptor 'logP'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.778 | 0.857 | 0.877 | 0.891 | 0.801 | 0.858 | 0.874 | 0.886 |
| | 0.696 | 0.776 | 0.795 | 0.809 | 0.751 | 0.805 | 0.820 | 0.832 |
| | 0.666 | 0.817 | 0.868 | 0.899 | 0.729 | 0.825 | 0.861 | 0.882 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.952 | 0.968 | 0.982 | 0.990 | 0.794 | 0.842 | 0.861 | 0.883 |
| | 0.940 | 0.958 | 0.972 | 0.984 | 0.776 | 0.819 | 0.837 | 0.860 |
| | 0.878 | 0.940 | 0.966 | 0.983 | 0.810 | 0.893 | 0.933 | 0.957 |
Table 12. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'node degree'; graph topology based on 7 Å atom distances.

Without descriptor 'atom type'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.835 | 0.889 | 0.902 | 0.913 | 0.829 | 0.871 | 0.882 | 0.893 |
| | 0.753 | 0.809 | 0.824 | 0.835 | 0.771 | 0.811 | 0.823 | 0.834 |
| | 0.737 | 0.866 | 0.905 | 0.926 | 0.801 | 0.878 | 0.904 | 0.919 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.963 | 0.974 | 0.985 | 0.991 | 0.787 | 0.823 | 0.841 | 0.865 |
| | 0.952 | 0.965 | 0.975 | 0.985 | 0.758 | 0.791 | 0.808 | 0.832 |
| | 0.898 | 0.951 | 0.973 | 0.987 | 0.857 | 0.922 | 0.949 | 0.967 |
Table 13. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'logP', 'atom type'; graph topology based on 7 Å atom distances.

Without descriptor 'node degree'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.852 | 0.901 | 0.913 | 0.923 | 0.865 | 0.903 | 0.913 | 0.921 |
| | 0.779 | 0.831 | 0.845 | 0.855 | 0.819 | 0.857 | 0.867 | 0.876 |
| | 0.745 | 0.865 | 0.901 | 0.921 | 0.788 | 0.863 | 0.890 | 0.906 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.962 | 0.973 | 0.984 | 0.991 | 0.803 | 0.845 | 0.863 | 0.885 |
| | 0.951 | 0.962 | 0.974 | 0.984 | 0.776 | 0.814 | 0.832 | 0.853 |
| | 0.903 | 0.952 | 0.973 | 0.987 | 0.823 | 0.898 | 0.936 | 0.959 |
Table 14. Results on test set. Model trained on descriptors 'partial charge', 'logP', 'node degree', 'atom type'; graph topology based on 7 Å atom distances.

Without descriptor 'atom size'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.843 | 0.894 | 0.906 | 0.916 | 0.848 | 0.885 | 0.896 | 0.906 |
| | 0.762 | 0.816 | 0.830 | 0.840 | 0.798 | 0.835 | 0.845 | 0.856 |
| | 0.744 | 0.873 | 0.910 | 0.930 | 0.803 | 0.878 | 0.902 | 0.916 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.955 | 0.968 | 0.981 | 0.990 | 0.824 | 0.854 | 0.870 | 0.891 |
| | 0.945 | 0.957 | 0.971 | 0.983 | 0.801 | 0.830 | 0.846 | 0.868 |
| | 0.903 | 0.953 | 0.974 | 0.987 | 0.874 | 0.925 | 0.950 | 0.967 |
Table 15. Results on test set. Model trained on descriptors 'atom size', 'logP', 'node degree', 'atom type'; graph topology based on 7 Å atom distances.

Without descriptor 'partial charge'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.821 | 0.878 | 0.893 | 0.904 | 0.811 | 0.858 | 0.873 | 0.886 |
| | 0.738 | 0.797 | 0.812 | 0.823 | 0.757 | 0.802 | 0.816 | 0.828 |
| | 0.723 | 0.856 | 0.897 | 0.920 | 0.777 | 0.861 | 0.890 | 0.908 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.957 | 0.970 | 0.983 | 0.992 | 0.819 | 0.851 | 0.868 | 0.889 |
| | 0.947 | 0.961 | 0.975 | 0.985 | 0.791 | 0.821 | 0.838 | 0.860 |
| | 0.887 | 0.945 | 0.968 | 0.984 | 0.853 | 0.912 | 0.943 | 0.963 |
Table 16. Results on test set. Model trained on descriptors 'partial charge', 'atom size', 'node degree', 'atom type'; graph topology based on 7 Å atom distances.

Without descriptor 'logP'

| | Negative field points | | | | Positive field points | | | |
|---|---|---|---|---|---|---|---|---|
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.842 | 0.893 | 0.906 | 0.917 | 0.836 | 0.875 | 0.886 | 0.896 |
| | 0.763 | 0.816 | 0.830 | 0.841 | 0.780 | 0.818 | 0.829 | 0.840 |
| | 0.750 | 0.872 | 0.909 | 0.930 | 0.811 | 0.883 | 0.908 | 0.922 |
| | Hydrophobic field points | | | | Van der Waals field points | | | |
| Threshold (Å) | 0.5 | 1 | 1.5 | 2 | 0.5 | 1 | 1.5 | 2 |
| | 0.956 | 0.969 | 0.981 | 0.990 | 0.823 | 0.857 | 0.874 | 0.895 |
| | 0.945 | 0.958 | 0.971 | 0.983 | 0.801 | 0.833 | 0.850 | 0.872 |
| | 0.906 | 0.954 | 0.973 | 0.987 | 0.857 | 0.914 | 0.943 | 0.962 |