Modifying Large Language Models for Directed Chemical Space Exploration

Joseph M. Cavanagh1, Kunyang Sun1, Andrew Gritsevskiy2, Dorian Bagni1, Thomas D. Bannister3,
Teresa Head-Gordon1,4,5†

Abstract

Here we show that a Large Language Model (LLM) can serve as a foundation model for a Chemical Language Model (CLM) which performs at or above the level of CLMs trained solely on chemical SMILES string data. Using supervised fine-tuning (SFT) and direct preference optimization (DPO) on the open-source Llama LLM, we demonstrate that we can train an LLM to respond to prompts such as generating molecules with properties of interest to drug development. This overall framework allows an LLM to not just be a chatbot client for chemistry and materials tasks, but can be adapted to speak more directly as a CLM which can generate molecules with user-specified properties.

1Kenneth S. Pitzer Theory Center and Department of Chemistry, University of California, Berkeley, CA, 94720 USA

2Department of Computer Science, University of Wisconsin–Madison, Madison, WI, 53706

3Department of Molecular Medicine, The Herbert Wertheim UF Scripps Institute for Biomedical Innovation and Technology, 130 Scripps Way, Jupiter, FL, 33458

4Departments of Bioengineering and Chemical and Biomolecular Engineering, University of California, Berkeley, CA, 94720 USA

5Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720 USA

†thg@berkeley.edu

1 Introduction

Language Models (LMs) are statistical models of probability distributions over units of language, and can be adapted to autoregressively generate text by sampling from these distributions1. Although LMs were originally developed for the domain of natural language, in recent years chemical language models (CLMs) trained on different string representations of small molecules have emerged as a useful tool for de novo generation of molecules, especially those with potential pharmaceutical applications 2, 3, 4, 5, 6, 7. Among the most notable one-dimensional representations of molecules are Simplified Molecular-Input Line-Entry System (SMILES) strings 8 and SELF-referencing Embedded Strings (SELFIES) 9. Early work using CLMs for molecular generation explored different combinations of string representations with model architectures, including recurrent neural networks (RNNs) using long short-term memory (LSTM) cells 10, 11, generative pre-trained transformers (GPTs)12, 13, and structured state space sequence (S4) models14, 15. While these models have achieved advances in molecular generation tasks, downstream optimization of molecules, especially for properties useful in medicinal chemistry, requires additional training or different model frameworks 5, 6, 16, 17, 18. Aside from model architecture development, many recent advances in LMs have resulted from training scaled-up transformers 19 on massive amounts of data. These Large Language Models (LLMs), such as GPT-4 and Llama, show capabilities across a surprisingly broad range of natural language processing tasks 20, 21, 22.

Roughly speaking, there are four main families of methods for steering the outputs of a pre-trained LLM. The first is prompt engineering, wherein a pre-trained LLM is given a task description, possibly with examples, in its context window23. This does not provide the LLM with any additional knowledge (beyond the description and any examples), but rather extracts knowledge obtained through pre-training. The second is supervised fine-tuning (SFT), in which the weights of an LLM are further optimized on task-specific data. However, fine-tuning rarely alters the model’s underlying capabilities, and is better mechanistically described as a “wrapper” that efficiently elicits existing ability24. The third is scoring the outputs of an LLM and updating the weights such that the model is more likely to produce higher-reward outputs; approaches include directly using reinforcement learning (REINFORCE)25, Reinforcement Learning from Human Feedback (RLHF)26, 27, and Direct Preference Optimization (DPO)28. Fourth and finally, researchers can use interpretability techniques, either by discovering monosemantic features corresponding to human-interpretable concepts and modifying them to guide the model towards specific behaviors, or by adding relevant “steering vectors” to the activations29, 30, 31, 32, 33. These methods are not mutually exclusive and can be combined to achieve even better results. In this study, we focus on all but the last of these methods to modify LLMs into CLMs.

Here, we demonstrate the usefulness of SFT and DPO by converting an open-weight LLM, Meta-Llama-3.1-8B-Instruct (which we will also refer to as "Llama" for the remainder of the paper), into a useful chemical language model22. Our methods here consist of two steps. First, supervised fine-tuning (SFT) of Llama yields a model, SmileyLlama, with the ability to generate drug-like molecules that have properties specified in a prompt. Second, we use DPO to optimize our model’s outputs towards one or multiple SMILES-based scoring functions, yielding a new model, SmileyLlama-Opt. This lets us improve the reliability of our model for generating molecules with certain properties. Separately, we investigate the use of DPO to grant SmileyLlama the ability to explore new, useful regions of chemical space such as the binders of specific proteins, including those not represented in the original dataset.

2 Methods

2.1 Supervised Fine-Tuning

The procedure we used for fine-tuning Llama is simple and extensible. We constructed our fine-tuning dataset from the SMILES strings of approximately 2 million molecules from ChEMBL Dataset v28 34. For each molecule in our dataset, we randomly picked a number of properties of pharmaceutical interest to calculate using RDKit 35. These properties include ranges of hydrogen-bond donors, hydrogen-bond acceptors, molecular weight, logP, number of rotatable bonds, the fraction of sp³ carbons (i.e., the number of sp³ carbons divided by the total number of carbon atoms in the molecule, termed Fsp³), the presence/absence of macrocycles, the presence/absence of covalent warhead-related SMARTS patterns, the presence/absence of at least one undesirable SMARTS pattern, a SMILES string representation of a BRICS single-pass substructure, and the specification of a chemical formula.
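As a concrete illustration of this step, the numerical descriptors above can be computed per molecule with RDKit. The sketch below is a minimal example of our own covering a subset of the descriptors, not the exact script used for dataset construction.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, rdMolDescriptors

def molecule_properties(smiles: str) -> dict:
    """Compute a subset of the prompt descriptors for one molecule (illustrative, not the full set)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    return {
        "H-bond donors": Lipinski.NumHDonors(mol),
        "H-bond acceptors": Lipinski.NumHAcceptors(mol),
        "molecular weight": round(Descriptors.MolWt(mol), 1),
        "logP": round(Descriptors.MolLogP(mol), 2),
        "rotatable bonds": Descriptors.NumRotatableBonds(mol),
        "Fsp3": round(rdMolDescriptors.CalcFractionCSP3(mol), 2),
        "TPSA": round(Descriptors.TPSA(mol), 1),
        "chemical formula": rdMolDescriptors.CalcMolFormula(mol),
    }

print(molecule_properties("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```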

For numerical properties, we choose ranges that are relevant for medicinal chemistry, recognizing a distinction between chemical diversity space and drug-like chemical space, where molecules must have characteristics suitable for drugs related not just to their shape but also to relevant biological phenomena (e.g., oral absorption, metabolism, distribution). For example, drug design starting molecules are typically smaller in size and lipophilicity than their follow-on analogs and optimized drugs, so a user can specify that the model obey the Rule of Three for fragment-based drug discovery when generating molecules36. As another example, we choose the topological polar surface area (TPSA) ranges based on those that tend to confer oral bioavailability or the ability to pass through the placenta or blood-brain barrier37. If a drug need not meet these criteria (e.g., an injectable drug for a peripheral target), then a user can adjust the TPSA range criterion.

After calculating and picking these properties for each SMILES string, we construct a prompt containing values of these properties, with the "correct" completion being the SMILES string that these properties were calculated from. This trains our model to generate molecules that have properties specified in the prompt. See Figure 1 and Algorithm 1 in the Supporting Information (SI) section 2 for other representations of this method not discussed in the main text below. Further information about the specifics of these properties and the ranges we choose to specify during training can be found in the SI section 1.
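A minimal sketch of assembling such a (prompt, completion) training example is given below; the property phrasing and the random-subset selection shown here are illustrative assumptions, with the exact templates and ranges given in the SI.

```python
import random

def build_sft_example(smiles: str, props: dict) -> dict:
    """Turn a molecule and its calculated properties into a (prompt, completion) pair (illustrative)."""
    # Mention a random subset of the calculated properties in the prompt.
    chosen = random.sample(sorted(props), k=random.randint(1, len(props)))
    spec = ", ".join(f"{name}: {props[name]}" for name in chosen)
    prompt = ("Output a SMILES string for a drug like molecule "
              f"with the following properties: {spec}")
    return {"prompt": prompt, "completion": smiles}  # the SMILES string is the "correct" answer
```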

2.2 Direct Preference Optimization

First, we use DPO to improve our model’s ability to robustly generate molecules with properties specified in the prompt. We prompt our fine-tuned model to generate molecules with a given property, such as 3 or fewer H-bond donors. We sample several SMILES strings and use RDKit to assess whether they have the correct properties. We pair molecules that correctly follow the prompt (winners) with those that do not (losers), then use a single epoch of DPO to improve the model’s results. See Algorithm 2 in SI section 2 for pseudocode of this scoring and pairing procedure.
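A rough sketch of this scoring-and-pairing step, in the spirit of Algorithm 2 but not a verbatim reproduction of it, is shown below using the H-bond donor count as the checked property.

```python
import random
from rdkit import Chem
from rdkit.Chem import Lipinski

def pair_for_dpo(prompt: str, responses: list, max_donors: int = 3) -> list:
    """Pair responses that satisfy the prompted constraint (winners) with those that do not (losers)."""
    winners, losers = [], []
    for smi in responses:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None and Lipinski.NumHDonors(mol) <= max_donors:
            winners.append(smi)
        else:
            losers.append(smi)
    random.shuffle(winners)
    random.shuffle(losers)
    # One preference pair per (winner, loser); the shorter list limits how many pairs we get.
    return [{"prompt": prompt, "chosen": w, "rejected": l}
            for w, l in zip(winners, losers)]
```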

We also use DPO to add four new capabilities to SmileyLlama, namely QED and binding to GSK3β, JNK3, and DRD2 as assessed by machine learning models implemented by TDCommons’ oracles38, 39, 40, 41. To do this, we prompt the model to create molecules with the property of a ‘High QED/GSK3B/JNK3/DRD2’, sampling 2,000 SMILES strings per target. This gives us 8,000 responses in total. We enforce diversity here, discarding any string that is identical to another. Discarding redundant SMILES strings serves two purposes. First, it makes the best use of a fixed number of oracle calls. Second, we found that it helps ameliorate crashing diversity in later epochs, which arises when DPO causes the model to simply memorize the molecules it sees most often, although it does not completely solve this issue. We then score these responses with the TDCommons oracle corresponding to the property specified in the prompt and pair each response with a random different response to the same prompt. We identify the ‘winner’ as the molecule with the greater score and the ‘loser’ as the molecule with the lesser score. We then run a single epoch of DPO before repeating the process. It should be emphasized that we did not train separate models for each objective; we simply specified which score (QED/GSK3B/JNK3/DRD2) our model is supposed to optimize in the prompt. We report the results for 20 epochs of this procedure. See Algorithm 3 in SI section 2 for pseudocode of the preparation of each iteration of DPO.
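The sketch below illustrates one iteration of this data-preparation loop, in the spirit of Algorithm 3 but not a verbatim reproduction; it assumes the TDCommons Oracle interface (a callable that scores a list of SMILES strings), and the prompt wording is illustrative.

```python
import random
from tdc import Oracle  # Therapeutics Data Commons oracles

def prepare_dpo_epoch(samples_per_task: dict) -> list:
    """Build one epoch of preference pairs from oracle-scored, deduplicated samples (illustrative)."""
    pairs = []
    for task, smiles_list in samples_per_task.items():  # e.g. {"QED": [...], "GSK3B": [...], ...}
        unique = list(dict.fromkeys(smiles_list))        # discard duplicate strings, keep order
        if len(unique) < 2:
            continue
        score_of = dict(zip(unique, Oracle(name=task)(unique)))
        prompt = ("Output a SMILES string for a drug like molecule "
                  f"with the following properties: High {task}")
        for smi in unique:
            other = random.choice([s for s in unique if s != smi])
            winner, loser = (smi, other) if score_of[smi] >= score_of[other] else (other, smi)
            pairs.append({"prompt": prompt, "chosen": winner, "rejected": loser})
    return pairs
```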

2.3 Training Details

We performed both SFT and DPO on Llama using the Axolotl package22, 42. For both SFT and DPO, we use Low-Rank Adaptation (LoRA) applied to the linear layers of the model and FlashAttention, with an Adam optimizer, cross-entropy loss, and a cosine learning rate scheduler with a maximum learning rate of 2×10⁻⁵ 43, 44, 45, 46. Additional parameters for our training are a LoRA rank of 32, a LoRA alpha of 16, a LoRA dropout of 5%, and 10 warmup steps. We use the accelerate and deepspeed packages to improve the efficiency of our training47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77. For SFT, we trained for 1 epoch using a micro batch size of 4 on a single 4xA40 node for approximately 1 day. We also constructed a second dataset from the same list of ChEMBL SMILES strings, but with re-randomized property hints, which we used for a second round of SFT (2xSFT).

We performed both SFT and DPO on Llama-3.1-8B-Instruct to create SmileyLlama and SmileyLlama-Opt as described here. For SFT we used a prompt with a system instruction of "You love and excel at generating SMILES strings of drug-like molecules" and a user instruction of the form "Output a SMILES string for a drug like molecule with the following properties:" if properties are specified, or "Output a SMILES string for a drug like molecule:" when no properties are specified. We structure the prompts used for SFT so that, during inference, the properties of generated molecules can be specified in the prompt. This spares users from having to downselect generated molecules for the correct characteristics and discard the vast majority. Figure 1 shows the overall workflow for the SFT procedure for SmileyLlama.
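At inference time these instructions are simply placed in the model's chat template; the snippet below is a minimal sketch using the Hugging Face transformers API, where the checkpoint name and the specific property phrase are placeholders rather than the released model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the SFT/DPO-adapted checkpoint for the base model name.
model_name = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system",
     "content": "You love and excel at generating SMILES strings of drug-like molecules"},
    {"role": "user",
     "content": "Output a SMILES string for a drug like molecule with the following properties: "
                "3 or fewer H-bond donors"},  # property phrasing is illustrative
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64, do_sample=True, temperature=1.1)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))  # generated SMILES
```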

[Figure 1: Overall workflow of the SFT procedure for SmileyLlama.]

3 Results

3.1 Temperature dependence and distribution of molecule properties

We begin our investigation of SmileyLlama by generating molecules while adjusting the temperature hyperparameter. We sample 10,000 molecules at various temperatures without properties specified in our prompt. We generate these samples independently from each other and in parallel, rather than letting the model generate 10,000 molecules sequentially. This both avoids potential biases introduced by earlier molecules in the context and speeds up inference, given the quadratic cost of attention with context length. In Figure 2a, we find that the optimal temperature is between 0.6 and 1.4, depending on the relative importance of diversity and validity. The largest percentage of SMILES strings that were valid, novel, and non-redundant occurred at T=1.1. This is a departure from typical LLM usage, where generating many short, unique responses is not usually required and the ideal temperature is typically 0.6–0.822.
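For reference, the validity, uniqueness, and novelty fractions can be computed by canonicalizing SMILES strings with RDKit; the helper below is a minimal sketch of this bookkeeping, following the definitions used in this work rather than the GuacaMol implementation used for Table 1.

```python
from rdkit import Chem

def generation_metrics(samples: list, training_set: set) -> dict:
    """Validity, uniqueness, and novelty as fractions of all sampled strings (definitions as in the text)."""
    canonical = []
    for smi in samples:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None:
            canonical.append(Chem.MolToSmiles(mol))  # canonical form for duplicate/novelty checks
    unique = set(canonical)
    novel = unique - training_set                    # training_set holds canonical ChEMBL SMILES
    n = len(samples)
    return {"validity": len(canonical) / n,
            "uniqueness": len(unique) / n,
            "novelty": len(novel) / n}
```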

We also investigate the distribution of properties of generated molecules and compare these to the ChEMBL training set. We find that, for the majority of properties, the two are remarkably similar, as shown in Figure S1. The primary exception here is QED (Figure 2b), where the generated molecules tend to score slightly higher than those of the training set.

[Figure 2: (a) Temperature dependence of generation metrics; (b) QED distribution of generated molecules compared with the ChEMBL training set.]

3.2 Comparison of SmileyLlama with few-shot Llama and other CLMs

To compare the generation ability of SmileyLlama with other existing methods, we used the GuacaMol suite79 to benchmark the validity, uniqueness, and novelty of the molecules. Additionally, KL divergence and Fréchet ChemNet Distance (FCD)80 are used to analyze the distributional shifts from the ChEMBL training data. In Table 1, we show that SFT significantly improves on the base Llama model's ability to generate drug-like molecules.

Table 1. GuacaMol benchmark results.

Benchmark                    Validity   Uniqueness   Novelty   KL div   FCD
Llama zero-shot (T=0.6)      0.8867     0.0830       0.1458    0.6874   0.0000
Llama zero-shot (T=1.1)      0.7030     0.6455       0.9006    0.8132   0.0054
SmileyLlama (T=0.6)          0.9968     0.9356       0.9113    0.8941   0.2925
SmileyLlama 2xSFT (T=0.6)    0.9967     0.9279       0.8835    0.8981   0.3542
SmileyLlama (T=1.1)          0.9783     0.9994       0.9713    0.9760   0.8369
SmileyLlama 2xSFT (T=1.1)    0.9831     0.9990       0.9615    0.9849   0.8488
LSTM15                       0.9828     0.9992       0.8476    0.9930   0.9006
GPT15                        0.9146     0.9995       0.9776    0.9785   0.8263
S415                         0.9712     0.9967       0.9606    0.9942   0.8526

We also investigated the ability of Llama-3.1-8B-Instruct to produce molecules with no fine-tuning by providing it with zero-, one-, and few-shot prompts from the ChEMBL database. We find that even without SFT or examples provided in a prompt, Llama is able to produce valid SMILES strings, although far from the level of the fine-tuned model or other state-of-the-art CLMs. As shown in Table S1, the more examples we provided in the prompt, the more unique and novel the generated molecules typically were, at the cost of lower validity. In contrast, SmileyLlama, with the user instruction "Output a SMILES string for a drug like molecule:", generated SMILES strings with no examples given in the prompt (zero-shot), which saves on computational cost during inference while providing outputs on par with or better than state-of-the-art CLMs. We also find that while a second round of SFT on a second dataset from the same ChEMBL source improves the validity and uniqueness somewhat, it is not nearly as consequential as the first round and slightly hurts novelty, as shown in Table 1.

3.3 Property Specification

We next test SmileyLlama’s ability to generate molecules satisfying any one of the criteria we used for SFT (ranges are given in the SI), using a panel of 380 tasks. One example is a task specifying an exact number of H-bond donors and acceptors, which the model was never trained on; as a reminder, SmileyLlama was trained on examples specifying a range of H-bond donors. We also test our model’s ability to generate molecules containing each of the 320 substructures in the Enamine database81 and to generate molecules that follow the Lipinski rule-of-five and the rule-of-three82. We sampled 1,000 SMILES strings from each prompt, with the exception of the Lipinski rules, for which we sampled 5,000 SMILES strings. Optimal performance was typically found at T=0.6 to 0.8.

In Table 2 we show the average percentage of valid and unique SMILES strings generated for each family of specified properties. We note that in English, ’unique’ is somewhat ambiguous: the number of unique items can mean either the number of distinct items (not counting duplicates) or the number of items that have no duplicate at all. We mean the former when we refer to the proportion of valid, unique elements; in Python, this proportion would be len(set(valid_items))/len(items), and the percentage is this proportion times 100%. We find that our SFT model does well on most tasks, with the exception of those it was not trained on, such as exact H-bond donor counts and the Lipinski rules. Fine-tuning on twice the amount of data did not greatly affect performance on these benchmarks (2xSFT).

Table 2. Percentage of valid and unique SMILES strings satisfying the prompted property.

Property                   1xSFT, T=0.6   2xSFT, T=0.6   DPO, T=1.1
≤ k H-bond donors          95.7%          96.0%          97.1%
≤ k H-bond acceptors       93.4%          94.4%          97.8%
≤ k molecular weight       76.8%          78.5%          97.0%
≤ k ClogP                  79.2%          79.8%          96.0%
Exactly k H-bond donors    21.4%          21.7%          30.4%
Enamine substructures      48.5%          50.3%          70.9%
Lipinski rule-of-five      65.3%          66.7%          84.9%
Rule-of-three              42.8%          44.0%          92.8%

We further optimized SmileyLlama for this task using DPO. DPO’s most popular application has been improving the responses of LLM-derived chatbots, such as the instruct-tuned Llama 3 models, but it has also found use in improving the outputs of CLMs83. Here, DPO provides a way to further optimize the model by pairing desirable responses with undesirable responses. The model’s weights are then updated to make it more likely to produce the ‘winner’ of each pairing and less likely to produce the ‘loser’. This avoids the need to separately train a reward model28. We generated our dataset by randomly pairing unsuccessful attempts at generating structures with successful attempts, which gives min(number of successful results, number of failed results) pairings for each task. We lump these pairings into a single dataset and optimize for 1 epoch using DPO. In this case, there were about 76,000 examples; DPO took about 3 hours and 30 minutes on a single 2xA40 node. The new DPO-optimized model, SmileyLlama-Opt, significantly improved results on the benchmark across the board, albeit with a higher optimal temperature than that of the SFT models, as seen in Table 2.

3.4 Optimizing for target affinities and implicit multi-objective optimization

Finally, we consider DPO for generating unique and valid ligands that bind to specific protein targets. In this case the SmileyLlama model is prompted to generate molecules with a high score for the four objectives from TDCommons: QED and binding to the drug target proteins GSK3B, JNK3, and DRD2. DPO training involved pairing each output SMILES string with another, random string, leading to 2,000 random pairings per task per epoch, with winners and losers assigned based on the higher score. This pairing is shown directly in Algorithm 3 of SI section 2.

As seen in Figure 3a, the SmileyLlama model does not initially understand the task at hand, so the median scores, aside from QED, start very low but rise substantially after ~10 epochs, such that the median score of unique generated molecules rises above 0.5 for every objective, which is the threshold for predicted activity for binding to GSK3B and JNK3.84 As shown in Figure 3b, however, diversity decreases shortly after the scores plateau, but typically remains above 50%. Increasing the temperature causes the model to require more epochs for the same improvement, but postpones and attenuates the crash in diversity, as shown in Figure S2 in SI section 4.

[Figure 3: (a) Median oracle scores per DPO epoch for QED, GSK3B, JNK3, and DRD2; (b) diversity of generated molecules per epoch; (c) QED and DRD2 score distributions for molecules generated with single- and combined-objective prompts.]

A surprising corollary is the finding that SmileyLlama-TDC-DPO can combine the knowledge gained during single-objective optimization to perform well on a task specifying multiple objectives. We find that we can elicit this by combining the prompts used during direct preference optimization. For instance, in Figure 3c we show the distributions of QED and DRD2 scores of 400 molecules generated by SmileyLlama-TDC-DPO using three different prompts. Even though our model was not trained on the generation of molecules with both a high QED and a high DRD2 score, it was able to combine the individual training objectives to yield molecules predicted to have both attributes.
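Concretely, the combined task is specified simply by listing the objectives in one prompt; the exact wording below follows the single-objective prompts and is an illustrative assumption.

```python
# Prompts used during DPO specify a single objective, e.g. "High QED" or "High DRD2";
# a combined request simply lists both (wording assumed, following the single-objective form).
base = "Output a SMILES string for a drug like molecule with the following properties: "
prompts = {
    "qed_only":  base + "High QED",
    "drd2_only": base + "High DRD2",
    "combined":  base + "High QED, High DRD2",
}
```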

4 Discussion and Conclusion

There have been other efforts to train LLMs on chemistry-focused data resulting in a model which can, among other tasks, translate between natural and chemical language 85, 86. Some more explicit attempts to elicit molecular generation capabilities from LLMs have used in-context learning and prompt engineering to generate molecules or optimize them for certain characteristics 87. Reinforcement learning has also been explored as a way to optimize LLMs’ generation for targets 88. There have also been studies of using in-context learning on either a specialized LLM or a general purpose LLM after SFT to generate and optimize molecules for targets with intriguingly successful results89, 87, 90, 91.

These advances show promise for using LLMs for molecular generation in drug discovery, but our study clarifies a few crucial points going forward. First, it is not necessary to pretrain a specialized model on chemistry-specific text to generate molecules from a text description; a much less resource-intensive SFT run, plus DPO on prompt-following, on a dataset of a few million molecules is enough. Second, DPO provides a resource-efficient way of optimizing the model to produce molecules that score well on a SMILES-based objective with zero examples or in-context learning. Third, training the same model with DPO on multiple individual criteria optimizes the model explicitly for a set of single objectives, and implicitly for combinations of objectives that can be specified in a prompt. This sets the stage for new, far more data- and compute-efficient approaches to multi-objective optimization in CLMs. Rather than having to optimize a model’s output on 2^n tasks, we could optimize a model on n tasks and then combine the prompts used to specify these tasks in many different combinations. This property, if it continues to hold in the limit of more tasks, could make possible a single model, trained on a large variety of targets, that serves as a “foundation model” for molecular generation tasks, with enough latent ability to optimize for a wide range of objectives that it can later be fine-tuned to excel at a specific molecular generation task. It would also be useful to be able to specify where, exactly, on the Pareto frontier a model should aim when generating molecules that score well on multiple objectives. Such questions could serve as useful further research directions.

In summary, we have investigated several aspects of adapting LLMs to problems that have traditionally been in the domain of CLMs. The property that has been key to all of the advantages here is prompting: LLMs can be fine-tuned and optimized with DPO to follow different tasks given different prompts. Information about the region of interest in chemical space can be specified in the prompt, letting the model generate structures in this region. Training on a variety of prompts on a large model allows for specialization in multiple tasks, and there is some evidence to suggest that the models can correctly generalize to new prompts that leverage the abilities developed when trained on similar prompts.

There is still a wide variety of unexplored possibilities in this space. Using the methods described above, others may be able to allow an LLM to take in more useful forms of input in its prompt, such as data specific to a protein target. A multimodal LLM could take advantage of information that text cannot represent well. Alternatively, the framework introduced in this paper for modifying LLMs to explore specific regions of chemical space could be leveraged for molecular design outside of drug discovery. As with many of the fields touched by large language models in the early 2020s, the newly opened frontier of possibility is as vast as it is exciting.

Acknowledgments

This work was supported in part by the National Institute of Allergy and Infectious Disease grant U19-AI171954 for the drug molecule application. We thank the CPIMS program, Office of Science, Office of Basic Energy Sciences, Chemical Sciences Division of the U.S. Department of Energy under Contract DE-AC02-05CH11231 for support of the machine learning. We’d like to thank Riza Özçelik for kindly providing the retraining code for different CLMs for benchmarking, Yingze Wang for providing the list of undesirable SMARTS substructures, and Nicole Kennedy for suggesting properties of molecules useful to medicinal chemists.

References

  • Rosenfeld 2000Rosenfeld,R. Two decades of statistical language modeling: where do we go from here? Proceedings of the IEEE 2000, 88, 1270–1278.
  • Cao and Kipf 2018Cao,N.D.; Kipf,T. MolGAN: An implicit generative model for small molecular graphs. ArXiv 2018, abs/1805.11973.
  • Tong etal. 2021Tong,X.; Liu,X.; Tan,X.; Li,X.; Jiang,J.; Xiong,Z.; Xu,T.; Jiang,H.; Qiao,N.; Zheng,M. Generative Models for De Novo Drug Design. Journal of Medicinal Chemistry 2021.
  • Flam-Shepherd etal. 2021Flam-Shepherd,D.; Zhu,K.; Aspuru-Guzik,A. Language models can learn complex molecular distributions. Nature Communications 2021, 13.
  • Skinnider etal. 2021Skinnider,M.; Stacey,R.; Wishart,D.; Foster,L. Chemical language models enable navigation in sparsely populated chemical space. Nature Machine Intelligence 2021, 3, 759 – 770.
  • Blaschke etal. 2020Blaschke,T.; Arús-Pous,J.; Chen,H.; Margreitter,C.; Tyrchan,C.; Engkvist,O.; Papadopoulos,K.; Patronov,A. REINVENT 2.0: An AI Tool for De Novo Drug Design. Journal of Chemical Information and Modeling 2020.
  • Grisoni 2023Grisoni,F. Chemical language models for de novo drug design: Challenges and opportunities. Current opinion in structural biology 2023, 79, 102527.
  • Weininger 1988Weininger,D. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J. Chem. Inf. Comput. Sci. 1988, 28, 31–36.
  • Krenn etal. 2019Krenn,M.; Hase,F.; Nigam,A.; Friederich,P.; Aspuru-Guzik,A. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation.
  • Hochreiter and Schmidhuber 1997Hochreiter,S.; Schmidhuber,J. Long Short-Term Memory. Neural Computation 1997, 9, 1735–1780.
  • Gupta etal. 2017Gupta,A.; Müller,A.T.; Huisman,B. J.H.; Fuchs,J.A.; Schneider,P.; Schneider,G. Generative Recurrent Networks for De Novo Drug Design. Molecular Informatics 2017, 37.
  • Radford etal. 2018Radford,A.; Narasimhan,K.; Salimans,T.; Sutskever,I. Improving Language Understanding by Generative Pre-Training. 2018; https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
  • Bagal etal. 2021Bagal,V.; Aggarwal,R.; Vinod,P.K.; Priyakumar,U. MolGPT: Molecular Generation Using a Transformer-Decoder Model. Journal of chemical information and modeling 2021, 62, 2064–2076.
  • Gu etal. 2022Gu,A.; Goel,K.; Ré,C. Efficiently Modeling Long Sequences with Structured State Spaces. 2022; https://arxiv.org/abs/2111.00396.
  • Özçelik etal. 2024Özçelik,R.; deRuiter,S.; Criscuolo,E.; Grisoni,F. Chemical language modeling with structured state space sequence models. Nature Communications 2024, 15, 6176.
  • Wang etal. 2023Wang,Y.; Zhao,H.; Sciabola,S.; Wang,W. cMolGPT: A Conditional Generative Pre-Trained Transformer for Target-Specific De Novo Molecular Generation. Molecules 2023, 28.
  • Zhou etal. 2019Zhou,Z.; Kearnes,S.; Li,L.; Zare,R.N.; Riley,P. Optimization of Molecules via Deep Reinforcement Learning. Scientific Reports 2019, 9, 10752, Published: 24 July 2019.
  • Li etal. 2024Li,J.; Zhang,O.; Sun,K.; Wang,Y.; Guan,X.; Bagni,D.; Haghighatlari,M.; Kearns,F.L.; Parks,C.; Amaro,R.E.; Head-Gordon,T. Mining for Potent Inhibitors through Artificial Intelligence and Physics: A Unified Methodology for Ligand Based and Structure Based Drug Design. Journal of Chemical Information and Modeling 2024,
  • Vaswani etal. 2023Vaswani,A.; Shazeer,N.; Parmar,N.; Uszkoreit,J.; Jones,L.; Gomez,A.N.; Kaiser,L.; Polosukhin,I. Attention Is All You Need. 2023; https://arxiv.org/abs/1706.03762.
  • Radford etal. 2018Radford,A.; Wu,J.; Child,R.; Luan,D.; Amodei,D.; Sutskever,I. Language Models are Unsupervised Multitask Learners. 2018,
  • 21OpenAI etal. GPT-4 Technical Report. http://arxiv.org/abs/2303.08774.
  • 22Dubey,A. etal. The Llama 3 Herd of Models. http://arxiv.org/abs/2407.21783.
  • 23Brown,T.B. etal. Language Models are Few-Shot Learners. http://arxiv.org/abs/2005.14165.
  • Jain etal. 2023Jain,S.; Kirk,R.; Lubana,E.S.; Dick,R.P.; Tanaka,H.; Grefenstette,E.; Rocktäschel,T.; Krueger,D.S. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks. 2023.
  • Ahmadian etal. 2024Ahmadian,A.; Cremer,C.; Gallé,M.; Fadaee,M.; Kreutzer,J.; Pietquin,O.; Üstün,A.; Hooker,S. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs. 2024; https://arxiv.org/abs/2402.14740.
  • 26Ziegler,D.M.; Stiennon,N.; Wu,J.; Brown,T.B.; Radford,A.; Amodei,D.; Christiano,P.; Irving,G. Fine-Tuning Language Models from Human Preferences. http://arxiv.org/abs/1909.08593.
  • Christiano etal. 2023Christiano,P.; Leike,J.; Brown,T.B.; Martic,M.; Legg,S.; Amodei,D. Deep reinforcement learning from human preferences. 2023; https://arxiv.org/abs/1706.03741.
  • 28Rafailov,R.; Sharma,A.; Mitchell,E.; Ermon,S.; Manning,C.D.; Finn,C. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. http://arxiv.org/abs/2305.18290.
  • Templeton etal. 2024Templeton,A. etal. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. Transformer Circuits Thread 2024,
  • Turner etal. 2024Turner,A.M.; Thiergart,L.; Leech,G.; Udell,D.; Vazquez,J.J.; Mini,U.; MacDiarmid,M. Activation Addition: Steering Language Models Without Optimization. 2024; https://arxiv.org/abs/2308.10248.
  • Mack and Turner 2024Mack,A.; Turner,A. Mechanistically Eliciting Latent Behaviors in Language Models. AI Alignment Forum 2024, https://www.alignmentforum.org/posts/ioPnHKFyy4Cw2Gr2x/mechanistically-eliciting-latent-behaviors-in-language-1.
  • Bricken etal. 2023Bricken,T.; Templeton,A.; Batson,J.; Chen,B.; Jermyn,A.; Conerly,T.; Turner,N.; Anil,C.; Denison,C.; Askell,A.; others Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread 2023, 2.
  • Cunningham etal. 2023Cunningham,H.; Ewart,A.; Riggs,L.; Huben,R.; Sharkey,L. Sparse Autoencoders Find Highly Interpretable Features in Language Models. 2023; https://arxiv.org/abs/2309.08600.
  • 34Gaulton,A.; Bellis,L.J.; Bento,A.P.; Chambers,J.; Davies,M.; Hersey,A.; Light,Y.; McGlinchey,S.; Michalovich,D.; Al-Lazikani,B.; Overington,J.P. ChEMBL: a large-scale bioactivity database for drug discovery. 40, D1100–D1107.
  • Landrum 2016Landrum,G. RDKit: Open-Source Cheminformatics Software. 2016,
  • 36Jhoti,H.; Williams,G.; Rees,D.C.; Murray,C.W. The ’rule of three’ for fragment-based drug discovery: where are we now? 12, 644–644, Publisher: Nature Publishing Group.
  • 37Veber,D.F.; Johnson,S.R.; Cheng,H.-Y.; Smith,B.R.; Ward,K.W.; Kopple,K.D. Molecular Properties That Influence the Oral Bioavailability of Drug Candidates. 45, 2615–2623, Publisher: American Chemical Society.
  • Huang etal. 2021Huang,K.; Fu,T.; Gao,W.; Zhao,Y.; Roohani,Y.; Leskovec,J.; Coley,C.W.; Xiao,C.; Sun,J.; Zitnik,M. Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development. Proceedings of Neural Information Processing Systems, NeurIPS Datasets and Benchmarks 2021,
  • Huang etal. 2022Huang,K.; Fu,T.; Gao,W.; Zhao,Y.; Roohani,Y.; Leskovec,J.; Coley,C.W.; Xiao,C.; Sun,J.; Zitnik,M. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology 2022,
  • Velez-Arce etal. 2024Velez-Arce,A.; Huang,K.; Li,M.; Lin,X.; Gao,W.; Fu,T.; Kellis,M.; Pentelute,B.L.; Zitnik,M. TDC-2: Multimodal Foundation for Therapeutic Science. bioRxiv 2024,
  • Gao etal. 2022Gao,W.; Fu,T.; Sun,J.; Coley,C. Sample efficiency matters: a benchmark for practical molecular optimization. Advances in Neural Information Processing Systems 2022, 35, 21342–21357.
  • 42Lian,W. axolotl. URL https://github.com/axolotl-ai-cloud/axolotl/tree/main. https://github.com/axolotl-ai-cloud/axolotl/tree/main.
  • 43Hu,E.J.; Shen,Y.; Wallis,P.; Allen-Zhu,Z.; Li,Y.; Wang,S.; Wang,L.; Chen,W. LoRA: Low-Rank Adaptation of Large Language Models. http://arxiv.org/abs/2106.09685.
  • Dao etal. 2022Dao,T.; Fu,D.Y.; Ermon,S.; Rudra,A.; Ré,C. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Advances in Neural Information Processing Systems (NeurIPS). 2022.
  • Dao 2024Dao,T. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. International Conference on Learning Representations (ICLR). 2024.
  • 46Kingma,D.P.; Ba,J. Adam: A Method for Stochastic Optimization. http://arxiv.org/abs/1412.6980.
  • Gugger etal. 2022Gugger,S.; Debut,L.; Wolf,T.; Schmid,P.; Mueller,Z.; Mangrulkar,S.; Sun,M.; Bossan,B. Accelerate: Training and inference at scale made simple, efficient and adaptable. https://github.com/huggingface/accelerate, 2022.
  • Rajbhandari etal. 2019Rajbhandari,S.; Rasley,J.; Ruwase,O.; He,Y. ZeRO: memory optimizations toward training trillion parameter models. arXiv preprint arXiv:1910.02054 2019,
  • Rasley etal. 2020Rasley,J.; Rajbhandari,S.; Ruwase,O.; He,Y. DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020.
  • Zhang and He 2020Zhang,M.; He,Y. Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. Advances in Neural Information Processing Systems. 2020.
  • Ren etal. 2021Ren,J.; Rajbhandari,S.; Aminabadi,R.Y.; Ruwase,O.; Yang,S.; Zhang,M.; Li,D.; He,Y. ZeRO-Offload: Democratizing Billion-Scale Model Training. arXiv preprint arXiv:2101.06840 2021,
  • Tang etal. 2021Tang,H.; Gan,S.; Awan,A.A.; Rajbhandari,S.; Li,C.; Lian,X.; Liu,J.; Zhang,C.; He,Y. 1-bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed. International Conference on Machine Learning. 2021.
  • Rajbhandari etal. 2021Rajbhandari,S.; Ruwase,O.; Rasley,J.; Smith,S.; He,Y. ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. arXiv preprint arXiv:2104.07857 2021,
  • Li etal. 2022Li,C.; Awan,A.A.; Tang,H.; Rajbhandari,S.; He,Y. 1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB’s Convergence Speed. IEEE 28th International Conference on High Performance Computing, Data, and Analytics. 2022.
  • Li etal. 2022Li,C.; Zhang,M.; He,Y. The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models. Advances in Neural Information Processing Systems. 2022.
  • Lu etal. 2022Lu,Y.; Li,C.; Zhang,M.; DeSa,C.; He,Y. Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam. arXiv preprint arXiv:2202.06009 2022,
  • Rajbhandari etal. 2022Rajbhandari,S.; Li,C.; Yao,Z.; Zhang,M.; Aminabadi,R.Y.; Awan,A.A.; Rasley,J.; He,Y. DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. International Conference on Machine Learning. 2022.
  • Smith etal. 2022Smith,S.; Patwary,M.; Norick,B.; LeGresley,P.; Rajbhandari,S.; Casper,J.; Liu,Z.; Prabhumoye,S.; Zerveas,G.; Korthikanti,V.; others Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. arXiv preprint arXiv:2201.11990 2022,
  • Wu etal. 2022Wu,X.; Yao,Z.; Zhang,M.; Li,C.; He,Y. Extreme Compression for Pre-trained Transformers Made Simple and Efficient. Advances in Neural Information Processing Systems. 2022.
  • Yao etal. 2022Yao,Z.; Aminabadi,R.Y.; Zhang,M.; Wu,X.; Li,C.; He,Y. ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. Advances in Neural Information Processing Systems. 2022.
  • Aminabadi etal. 2022Aminabadi,R.Y.; Rajbhandari,S.; Zhang,M.; Awan,A.A.; Li,C.; Li,D.; Zheng,E.; Rasley,J.; Smith,S.; Ruwase,O.; He,Y. DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. 2022.
  • Yao etal. 2022Yao,Z.; Wu,X.; Li,C.; Holmes,C.; Zhang,M.; Li,C.; He,Y. Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers. arXiv preprint arXiv:2211.11586 2022,
  • Li etal. 2022Li,C.; Yao,Z.; Wu,X.; Zhang,M.; He,Y. DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. arXiv preprint arXiv:2212.03597 2022,
  • Wu etal. 2023Wu,X.; Li,C.; Aminabadi,R.Y.; Yao,Z.; He,Y. Understanding INT4 Quantization for Transformer Models: Latency Speedup, Composability, and Failure Cases. International Conference on Machine Learning. 2023.
  • Zawad etal. 2023Zawad,S.; Li,C.; Yao,Z.; Zheng,E.; He,Y.; Yan,F. DySR: Adaptive Super-Resolution via Algorithm and System Co-design. International Conference on Learning Representations. 2023.
  • Shen etal. 2023Shen,S.; Yao,Z.; Li,C.; Darrell,T.; Keutzer,K.; He,Y. Scaling Vision-Language Models with Sparse Mixture of Experts. Findings of the Association for Computational Linguistics: EMNLP 2023. 2023.
  • Anthony etal. 2023Anthony,Q.; Awan,A.A.; Rasley,J.; He,Y.; Shafi,A.; Abduljabbar,M.; Subramoni,H.; Panda,D. MCR-DL: Mix-and-Match Communication Runtime for Deep Learning. IEEE International Parallel and Distributed Processing Symposium. 2023.
  • Singh etal. 2023Singh,S.; Ruwase,O.; Awan,A.A.; Rajbhandari,S.; He,Y.; Bhatele,A. A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training. Proceedings of the 37th International Conference on Supercomputing. 2023.
  • Wang etal. 2023Wang,G.; Qin,H.; Jacobs,S.A.; Wu,X.; Holmes,C.; Yao,Z.; Rajbhandari,S.; Ruwase,O.; Yan,F.; Yang,L.; He,Y. ZeRO++: Extremely Efficient Collective Communication for Giant Model Training. arXiv preprint arXiv:2306.10209 2023,
  • Golnari etal. 2023Golnari,P.A.; Yao,Z.; He,Y. Selective Guidance: Are All the Denoising Steps of Guided Diffusion Important? arXiv preprint arXiv:2305.09847 2023,
  • Wu etal. 2023Wu,X.; Yao,Z.; He,Y. ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats. arXiv preprint arXiv:2307.09782 2023,
  • Yao etal. 2023Yao,Z.; Aminabadi,R.Y.; Ruwase,O.; Rajbhandari,S.; Wu,X.; Awan,A.A.; Rasley,J.; Zhang,M.; Li,C.; Holmes,C.; others DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. arXiv preprint arXiv:2308.01320 2023,
  • Song etal. 2023Song,S.L.; Kruft,B.; Zhang,M.; Li,C.; Chen,S.; Zhang,C.; Tanaka,M.; Wu,X.; Rasley,J.; Awan,A.A.; others DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies. arXiv preprint arXiv:2310.04610 2023,
  • Yao etal. 2023Yao,Z.; Wu,X.; Li,C.; Youn,S.; He,Y. ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation. arXiv preprint arXiv:2303.08302 2023,
  • Xia etal. 2024Xia,H.; Zheng,Z.; Wu,X.; Chen,S.; Yao,Z.; Youn,S.; Bakhtiari,A.; Wyatt,M.; Zhuang,D.; Zhou,Z.; Ruwase,O.; He,Y.; Song,S.L. FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design. arXiv preprint arXiv:2401.14112 2024,
  • Jacobs etal. 2024Jacobs,S.A.; Tanaka,M.; Zhang,C.; Zhang,M.; Aminadabi,R.Y.; Song,S.L.; Rajbhandari,S.; He,Y. System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models. arXiv preprint 2024,
  • Lian etal. 2024Lian,X.; Jacobs,S.A.; Kurilenko,L.; Tanaka,M.; Bekman,S.; Ruwase,O.; Zhang,M. Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed Training. arXiv preprint arXiv:2406.18820 2024,
  • Hunter 2007Hunter,J.D. Matplotlib: A 2D graphics environment. Computing in Science & Engineering 2007, 9, 90–95.
  • Brown etal. 2019Brown,N.; Fiscato,M.; Segler,M.H.; Vaucher,A.C. GuacaMol: Benchmarking Models for de Novo Molecular Design. Journal of Chemical Information and Modeling 2019, 59, 1096–1108.
  • Preuer etal. 2018Preuer,K.; Renz,P.; Unterthiner,T.; Hochreiter,S.; Klambauer,G. Fréchet ChemNet distance: a metric for generative models for molecules in drug discovery. J. Chem. Inform. Model. 2018, 58, 1736–1741.
  • 81Enamine Essential Fragment Library. https://enamine.net/compound-libraries/fragment-libraries/essential-library, Accessed: 2024-08-23.
  • Lipinski etal. 2001Lipinski,C.A.; Lombardo,F.; Dominy,B.W.; Feeney,P.J. Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings1PII of original article: S0169-409X(96)00423-1. The article was originally published in Advanced Drug Delivery Reviews 23 (1997) 3–25.1. Advanced Drug Delivery Reviews 2001, 46, 3–26, Special issue dedicated to Dr. Eric Tomlinson, Advanced Drug Delivery Reviews, A Selection of the Most Highly Cited Articles, 1991-1998.
  • 83Park,R.; Theisen,R.; Sahni,N.; Patek,M.; Cichońska,A.; Rahman,R. Preference Optimization for Molecular Language Models. http://arxiv.org/abs/2310.12304.
  • Jin etal. 2020Jin,W.; Barzilay,R.; Jaakkola,T. Multi-Objective Molecule Generation using Interpretable Substructures. 2020; https://arxiv.org/abs/2002.03244.
  • Edwards etal. 2022Edwards,C.; Lai,T.; Ros,K.; Honke,G.; Cho,K.; Ji,H. Translation between Molecules and Natural Language. 2022; http://arxiv.org/abs/2204.11817, arXiv:2204.11817 [cs].
  • Pei etal. 2023Pei,Q.; Zhang,W.; Zhu,J.; Wu,K.; Gao,K.; Wu,L.; Xia,Y.; Yan,R. BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations. ArXiv 2023, abs/2310.07276.
  • Wang etal. 2024Wang,H.; Skreta,M.; Ser,C.-T.; Gao,W.; Kong,L.; Strieth-Kalthoff,F.; Duan,C.; Zhuang,Y.; Yu,Y.; Zhu,Y.; Du,Y.; Aspuru-Guzik,A.; Neklyudov,K.; Zhang,C. Efficient Evolutionary Search Over Chemical Space with Large Language Models. 2024; http://arxiv.org/abs/2406.16976, arXiv:2406.16976 [physics].
  • Ahmed and Elattar 2024Ahmed,S.J.; Elattar,M.A. Improving Targeted Molecule Generation through Language Model Fine-Tuning Via Reinforcement Learning. 2024; http://arxiv.org/abs/2405.06836, arXiv:2405.06836 [cs, q-bio].
  • 89Guevorguian,P.; Bedrosian,M.; Fahradyan,T.; Chilingaryan,G.; Khachatrian,H.; Aghajanyan,A. Small Molecule Optimization with Large Language Models. http://arxiv.org/abs/2407.18897, version: 1.
  • Bhattacharya etal. 2024Bhattacharya,D.; Cassady,H.; Hickner,M.; Reinhart,W. Large Language Models as Molecular Design Engines. 2024; https://chemrxiv.org/engage/chemrxiv/article-details/664c98ea418a5379b0e07d31.
  • Liu etal. 2024Liu,X.; Guo,Y.; Li,H.; Liu,J.; Huang,S.; Ke,B.; Lv,J. DrugLLM: Open Large Language Model for Few-shot Molecule Generation. ArXiv 2024,