Artificial intelligence can enhance research in nuclear molten salt chemistry by predicting experimental outcomes rapidly and efficiently, ultimately reducing the time and cost associated with traditional experimentation methods.
Imagine walking into a laboratory and knowing the outcome of an experiment before even performing it. What about a hundred experimental outcomes? A thousand? How would that influence your decision-making when developing a product? Most of us are familiar with artificial intelligence’s (AI) ability to generate realistic photos, videos, and text chat, but we can leverage this power in another way: by generating novel experimental results.
While this predictive power has been widely embraced in fields like pharmaceuticals and battery research, nuclear molten salt chemistry remains a significant, largely untapped opportunity for AI integration. Molten salt electrochemical experimentation tends to be time-consuming and expensive, requires multiple costly pieces of equipment, and demands extensive expertise to perform accurately, and that is just for a single configuration. Now, imagine the investment required to test hundreds or even thousands of different salt compositions and elements.
This is where AI steps in. By utilising past experimental data, we can develop AI models that rapidly predict electrochemical responses for various experimental configurations. And the best part? Once a model has been trained and validated, responses can be generated within seconds, not days. This gives researchers a convenient way to approximate countless results before committing to the significant investment of a physical experiment, leading to more informed and efficient experimentation choices.
The engine behind the predictions
To be confident in our predictions, it is necessary to understand exactly how those predictions are generated. While machine learning is a vast field, its approaches can generally be categorised into three distinct types: supervised, unsupervised and reinforcement learning. The research described here uses supervised learning exclusively, but the distinction matters because each approach learns from data in a fundamentally different way. In supervised learning, inputs, also known as features, are paired with labelled targets, the quantities we are trying to predict. Supervised networks generally follow four specific steps when training:
- Forward propagation – Moving data through the model and returning a predicted output. Sometimes also referred to as the ‘forward pass.’
- Loss calculation – Comparing the predicted results from the forward pass to the actual results. For supervised learning, the mean squared error is commonly utilised.
- Backpropagation – Calculating the error gradient of the loss function to determine which parameters need adjustment.
- Optimisation – Updating the internal parameters to minimise the total loss via an algorithm such as Adam or gradient descent.
These four steps are performed iteratively, and each iteration is referred to as an epoch. Given enough epochs, quality training data and an appropriate model, the system dynamics will be approximated well. This approach creates a data-driven model, distinct from a traditional physics-based model in that it learns to approximate the underlying physical behaviour purely from historical data rather than from simulating physics equations. For example, Fig. 1 compares the true and predicted cyclic voltammogram responses of a neural network trained on data for 4 wt% UCl3 in a LiCl-KCl molten salt at 798 K.
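The four-step training loop can be sketched in a few lines of code. The snippet below is a deliberately minimal illustration, a single linear model fitted with plain gradient descent on made-up data, not the neural networks or electrochemical data used in the actual research.

```python
# Minimal sketch of the four-step supervised training loop
# (toy 1-D linear model on synthetic data; illustrative only).

def train(xs, ys, lr=0.1, epochs=500):
    w, b = 0.0, 0.0                      # internal parameters
    n = len(xs)
    for _ in range(epochs):              # one full pass = one epoch
        # 1. Forward propagation: produce predicted outputs
        preds = [w * x + b for x in xs]
        # 2. Loss calculation: mean squared error vs. actual targets
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n
        # 3. Backpropagation: gradients of the loss w.r.t. w and b
        dw = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        db = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
        # 4. Optimisation: plain gradient descent parameter update
        w -= lr * dw
        b -= lr * db
    return w, b, loss

# Targets generated from y = 2x + 1; training should recover w≈2, b≈1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x + 1 for x in xs]
w, b, loss = train(xs, ys)
```

In practice, libraries such as PyTorch automate steps 3 and 4 (for instance, with the Adam optimiser mentioned above), but the underlying cycle is exactly this one.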
Fig. 1: Comparison of an experimental cyclic voltammogram and a corresponding AI-generated prediction for 4 wt% uranium in a LiCl−KCl molten salt at 798 K
The heart of supervised learning is this four-step process, and there are many ways of implementing it. The most straightforward is the multilayer perceptron network (MLP), where information flows in one direction, from input to output, with the final output being the prediction itself. More complex cases arise with sequential or time series data, where recurrent neural networks (RNN) often shine.
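An MLP forward pass is easy to write out by hand. The sketch below uses hand-set weights for a two-input, two-hidden-unit, one-output network purely for illustration; in a real model these weights would be learned through the training loop described above.

```python
import math

# Sketch of a tiny multilayer perceptron forward pass: information
# flows strictly input -> hidden -> output. Weights are hand-set
# for illustration; in practice they are learned during training.

def mlp_forward(x, W1, b1, W2, b2):
    # hidden layer: weighted sums followed by a tanh activation
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # output layer: linear combination of the hidden activations
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# two inputs -> two hidden units -> one output
W1 = [[0.5, -0.3], [0.8, 0.1]]
b1 = [0.0, 0.1]
W2 = [1.2, -0.7]
b2 = 0.05
y = mlp_forward([1.0, 2.0], W1, b1, W2, b2)
```

Note there is no path by which later inputs can influence how earlier ones are processed, which is exactly the limitation recurrent networks address.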
Unlike MLPs, these networks can loop information back on itself, allowing them to process sequences of data and effectively ‘remember’ what happened earlier. This makes them ideal for tasks where the order of information is critical. Common architectures include simple RNNs, long short-term memory networks (LSTM) and gated recurrent unit networks (GRU). Architecture selection is driven mostly by the type of data being processed and the goals of the project itself.
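The ‘memory’ of a recurrent network comes from feeding its hidden state back into itself at every step. The minimal Elman-style cell below uses hand-set scalar weights (a real RNN learns them, and LSTM/GRU cells add gating on top); it shows that, unlike an MLP, the output depends on the order of the inputs.

```python
import math

# Minimal Elman-style recurrent cell with hand-set scalar weights
# (purely illustrative; real RNNs learn w_x, w_h and b from data).

def rnn_forward(sequence, w_x=0.5, w_h=0.8, b=0.0):
    h = 0.0                          # hidden state: the network's 'memory'
    for x in sequence:
        # the previous hidden state loops back into each update
        h = math.tanh(w_x * x + w_h * h + b)
    return h

# The same values in a different order give a different final state,
# because each step depends on everything that came before it.
forward_order = rnn_forward([1.0, 2.0, 3.0])
reverse_order = rnn_forward([3.0, 2.0, 1.0])
```

This order sensitivity is what makes recurrent architectures a natural fit for time series such as voltammetry sweeps.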
How VCU is putting theory into practice
At VCU, our team has actively embraced this AI-driven approach, leveraging powerful computational resources and advanced coding libraries to develop and train multiple neural network models on real-world experimental data. A significant hurdle in this specialised field, however, is the inherent scarcity of high-quality electrochemical data. This led us to design a crucial part of our research: investigating how effectively our models could learn and generalise from limited datasets.
To explore this, we used historical experimental data from various experimental configurations to train multiple neural network architectures at varying dataset sizes, allowing us to determine not only which models perform best with scarce information but also the minimum amount of data needed to generate reliable results. Furthermore, we challenged the networks with incomplete inputs to see if they could still generate accurate electrochemical responses from less-than-ideal information. Initial results consistently showed that neural networks can approximate electrochemical responses well, even with limited or incomplete data.
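The dataset-size study follows the familiar learning-curve pattern: fit the same model on growing subsets of the data and track its error on held-out points. The sketch below uses a closed-form 1-D least-squares fit on synthetic data as a stand-in for the neural networks and electrochemical datasets used in the actual study.

```python
# Hedged sketch of a learning-curve study: fit on growing subsets of
# 'historical' data, evaluate on held-out points. A toy 1-D least-
# squares fit stands in for the neural network models.

def fit(xs, ys):
    # closed-form 1-D least squares: y ≈ w*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return w, my - w * mx

def mse(model, xs, ys):
    w, b = model
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# synthetic 'experimental' data with a small alternating perturbation
train_x = [i / 10 for i in range(1, 31)]
train_y = [2 * x + 1 + 0.05 * ((-1) ** i) for i, x in enumerate(train_x)]
test_x = [0.25, 1.25, 2.25]
test_y = [2 * x + 1 for x in test_x]

# held-out error as a function of training-set size
errors = {n: mse(fit(train_x[:n], train_y[:n]), test_x, test_y)
          for n in (3, 10, 30)}
```

Plotting such errors against dataset size is one way to read off the minimum amount of data needed for reliable predictions.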
Building upon these initial findings, we performed a comprehensive electrochemical simulation for UCl3 at two different weight concentrations, deploying several different AI models to predict the full range of electrochemical responses. Our initial testing yielded excellent results for both cyclic voltammetry and open circuit potential data. Fig. 2 displays one such example for 1 wt% UCl3 in LiCl-KCl molten salt, showcasing how accurately our models can approximate these electrochemical responses.
Fig. 2: Comparison of experimental data and AI-generated predictions for the cyclic voltammogram and open-circuit potential of 1 wt% uranium in a LiCl−KCl molten salt at 748 K
Despite this success with cyclic voltammetry and open circuit potential, electrochemical impedance spectroscopy, another critical electrochemical technique, still presents challenges. Fig. 3 compares the true and generated Nyquist plots, illustrating the less-than-ideal agreement between predicted and actual data.
Fig. 3: A Nyquist plot comparing the experimental and AI-predicted electrochemical impedance spectroscopy data for 1 wt% uranium in LiCl−KCl molten salt at 748 K
Paving the way for future innovations
Though substantial progress has been made, there is still much work to be done. While our AI models show immense promise in predicting responses for many electrochemical techniques, such as cyclic voltammetry and open circuit potential, other areas remain active frontiers. Electrochemical impedance spectroscopy presents a significant challenge due to its complex, frequency-dependent responses. Mastering its prediction demands even more sophisticated modelling and extensive, high-quality training data to fully capture its nuances. By expanding our predictive capabilities across this wider array of techniques, we aim to further reduce the need for costly, time-consuming physical experiments and allow researchers to explore a vast parameter space digitally.
Looking ahead, we would like to extend this predictive capability beyond single elemental molten salt systems. By tackling the complexities of multi-component molten salts, we can unlock a more complete understanding of the intricate interactions occurring within these critical systems and extract key system parameters from them, such as their thermodynamic and transport properties.
Furthermore, we would also like to extend this predictive power towards other critical techniques such as laser-induced breakdown spectroscopy. This method holds immense promise in the nuclear molten salt world, making it a prime target for AI integration as it can provide rapid insights into elemental composition.
Ultimately, our work seeks to drastically accelerate the fundamental understanding and development of molten salt technologies, empowering researchers to quickly and efficiently pinpoint the most impactful experiments and accelerate real-world applications.
Please note, this article will also appear in the 23rd edition of our quarterly publication.