Guest Blogger: Dr. David Freed

In the new decade, engineering simulation will be hugely impacted by, and become inextricably entwined with, machine learning and artificial intelligence (ML/AI). I believe we will see this initially emerge in two ways: (1) ML/AI will be used to make simulation work better, and (2) simulation will be used to generate synthetic data to train ML/AI, in other words, simulation-powered ML/AI. Possible examples of (1) include ML/AI applied to mesh generation, dynamic mesh refinement, simulation cost prediction, transient simulation convergence detection, and model setup and input error checking. These are common challenges for simulation practitioners, both novice and expert, where improvements will add value by making simulation more efficient, reliable, and accessible.

For example, when running a new type of simulation or analyzing a new (geometrically different) model, it can be hard to know a priori the minimum cluster size needed (usually determined by the amount of RAM required), how long the run will take, and the compute resource cost (typically measured in core-hours). These important requirements are often found through trial and error after multiple failures, such as the solver crashing from insufficient RAM, or the user terminating a run that is taking too long so the setup can be modified. An accurate prediction of the RAM, time, and cost needed for a new simulation could be produced by an ML/AI algorithm trained on a database of previous simulations. By eliminating the trial and error and giving the user accurate information up front, such a simulation estimator would use ML/AI to make simulation work better.
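To make this concrete, here is a minimal sketch of such an estimator in Python. Everything in it is an illustrative assumption: the feature set (cell count, time steps, solver type), the synthetic data standing in for a real log of past runs, and the choice of a gradient-boosted regressor. It is not a description of any actual product's internals.

```python
# Hypothetical sketch: predicting peak RAM for a new simulation from a
# log of past runs. The features and synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a database of previous simulations.
n = 500
cells = rng.uniform(1e5, 5e7, n)           # mesh cell count
steps = rng.integers(100, 5000, n)         # number of time steps
solver = rng.integers(0, 3, n)             # encoded solver type
ram_gb = 2.0 + 4e-7 * cells + rng.normal(0, 1, n)  # observed peak RAM (GB)

X = np.column_stack([cells, steps, solver])
X_train, X_test, y_train, y_test = train_test_split(X, ram_gb, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("Predicted peak RAM (GB):", model.predict(X_test[:3]))
```

The same pattern extends naturally to predicting wall-clock time and core-hour cost; accuracy depends mainly on how well the logged features capture what actually drives resource usage.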

Turning to point (2), the idea here is to replace or augment physical test results with simulation results, i.e. “synthetic data”, for ML/AI algorithms that require such data for training. For example, consider the use of sensors and ML/AI to predict the failure of equipment such as a pipe, rotor blade, or dam. The sensors can measure physical properties like strain, temperature, and vibration at various locations. The goal is to train the ML/AI algorithm to predict in real time whether the sensor readings indicate that failure is imminent and mitigating action should be taken. To work well, the ML/AI needs a large set of training data that covers an appropriate range of possible conditions, all sensor readings for those conditions, and whether or not failure occurs under those conditions. The time and cost needed to generate a sufficient data set from physical testing can be impractical or even prohibitive for many applications. Engineering simulation, on the other hand, such as thermal-mechanical analysis of the pipe, blade, or dam, can generate the required training data for as many conditions as needed, limited only by the available compute power. This assumes simulations that are sufficiently realistic, as well as a compute environment that allows a large number of simulations to run in parallel. Both of these requirements, along with tremendously powerful out-of-the-box ML/AI tools, are becoming more and more readily available; therefore I believe we will see an explosion of simulation-powered ML/AI for a variety of fascinating and compelling use cases.
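As a toy illustration of this workflow, the sketch below replaces the full thermal-mechanical simulation with a simple surrogate function, runs it across a sweep of operating conditions, and trains a classifier on the resulting synthetic sensor data. The sensor model, failure criterion, and condition ranges are all invented for the example.

```python
# Hedged sketch: training a failure classifier on simulation-generated
# ("synthetic") sensor data. simulate_sensors is a toy stand-in for a
# full thermal-mechanical simulation of the pipe, blade, or dam.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def simulate_sensors(load, temperature):
    """One 'simulation' run: returns (strain, vibration) readings and a
    failure label for the given operating conditions."""
    strain = 0.002 * load + 1e-5 * temperature + rng.normal(0, 1e-4)
    vibration = 0.05 * load + rng.normal(0, 0.1)
    fails = strain > 0.9  # simplistic failure criterion
    return strain, vibration, fails

# Sweep a range of (load, temperature) conditions, one run per condition.
conditions = rng.uniform([100, 20], [600, 300], size=(2000, 2))
rows = [simulate_sensors(load, temp) for load, temp in conditions]
X = np.array([(s, v) for s, v, _ in rows])  # sensor readings
y = np.array([f for _, _, f in rows])       # failure labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
# In deployment, live sensor readings would be passed to clf.predict(...).
```

In a real application each row would come from a fleet of full-fidelity simulations running in parallel, which is exactly where a scalable compute environment earns its keep.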

I think that over time a third type of interaction between engineering simulation and ML/AI will emerge, in the form of novel algorithms that tightly combine traditional physics simulation methods with ML/AI routines. This could take the form of replacing various geometric regions of a simulation with much faster but sufficiently accurate ML/AI models; this seems especially promising for regions that have limited communication with other regions, such that only a small amount of information is passed between them, in effect using the ML/AI as a reduced-order model. Perhaps certain complex physics behaviors within a simulation, such as some types of CFD boundary conditions, can also be improved using ML/AI. I expect many other novel forms of interplay between established numerical methods and ML/AI to emerge in the coming years.
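The sketch below, with all names hypothetical, illustrates the coupling pattern: a conventional time-stepping solver handles most of the domain, while a trained surrogate stands in for one region and exchanges only a small interface state each step.

```python
# Illustrative sketch: coupling a traditional solver with an ML surrogate
# acting as a reduced-order model for one region of the domain.
import numpy as np

class SurrogateRegion:
    """Stand-in for a trained ML reduced-order model of a subdomain.
    Here it is just a fixed linear map; in practice it would be a network
    trained on full-order simulations of that region."""
    def __init__(self):
        self.A = np.array([[0.9, 0.1], [0.0, 0.95]])

    def step(self, interface_state):
        return self.A @ interface_state

def physics_step(field, interface_state, dt=0.01):
    """Toy explicit update for the conventionally simulated region,
    driven at its boundary by the surrogate's interface output."""
    field = field + dt * (np.roll(field, 1) - field)
    field[0] = interface_state[0]  # boundary value from the ML region
    return field

field = np.ones(100)               # full-physics region
interface = np.array([1.0, 0.5])   # small state exchanged each step
rom = SurrogateRegion()

for _ in range(50):
    interface = rom.step(interface)          # fast ML model for its region
    field = physics_step(field, interface)   # traditional physics elsewhere
```

The appeal is that the expensive region is evaluated in microseconds rather than re-solved each step, while the small interface state keeps the two halves consistent.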

Finally, it may be possible to represent a 3D model geometry as a set of attributes suitable for inclusion in ML/AI training data. Then, by running many simulations for many models, it might be possible to generate enough training data for the ML/AI to successfully predict the desired KPIs (key performance indicators), effectively replacing the need for new simulations even for a new geometry. This would likely work only for specific, well-constrained applications, such as fracture analysis of a pipe weld. However, given the ever-increasing compute power available for simulation and the pace of ML/AI development, I see real potential for this approach to greatly improve the speed, accessibility, and value of engineering analysis across a wide range of applications.
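A hypothetical sketch of that last idea: encode each pipe-weld geometry as a small attribute vector, use many prior simulation results as training targets, and let a regressor predict the KPI directly for a geometry it has never seen. The features, KPI, and synthetic targets here are illustrative assumptions.

```python
# Hypothetical sketch: predicting a KPI (peak stress) directly from
# geometry attributes, trained on results from many prior simulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Geometry attributes: wall thickness (mm), weld angle (deg), diameter (mm).
n = 1000
geometry = rng.uniform([5, 30, 100], [25, 60, 500], size=(n, 3))

# Synthetic stand-in for simulated peak stress across those geometries.
peak_stress = (1000.0 / geometry[:, 0]      # thinner wall -> higher stress
               + 2.0 * geometry[:, 1]
               + rng.normal(0, 5, n))

model = RandomForestRegressor(random_state=0).fit(geometry, peak_stress)

new_weld = np.array([[12.0, 45.0, 250.0]])  # a geometry never simulated
print("Predicted peak stress:", model.predict(new_weld))
```

Whether such a model generalizes safely is exactly why this approach is best confined to specific, well-constrained applications.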

I’m definitely excited to witness the next decade of innovations, advances, and business trends for simulation. At OnScale, we plan to participate actively in the advancement of simulation and expect ML/AI to play a big role, both in our own activities and in the CAE community at large.


Are you interested in providing us with a guest blog post?
Contact us here and let us know what you’re thinking!