Call for Abstracts
Abstract submissions extended to June 18, 2023.
A complete electronic submission of your abstract must satisfy the following requirements:
- Each abstract may be submitted to, and affiliated with, only one mini-symposium (topic).
- The title should be less than 20 words. Avoid acronyms, and use sentence case capitalization (e.g., This is my abstract title without acronyms).
- Abstracts should contain only text and are limited to 400 words.
- The submitting author will be automatically designated as the corresponding author.
- Include the presenting author's full name, title, and email. Identify the presenting author with a * after the last name.
- Add each co-author's full name and email.
- Use the link above to submit your abstract online. The link will take you to the Morressier abstract submission portal. Account registration with Morressier will be required.
- Send any questions to MMLDE2023@UTEP.EDU.
Abstracts will be published as online, searchable proceedings.
I. Multiscale Materials and Engineered Systems
Machine Learning and Multiscale Modeling for Complex Materials and Structures
- Ying Li, University of Wisconsin-Madison
- Krishnan Suresh, University of Wisconsin-Madison
- Hongyi Xu, University of Connecticut
The rapid development of computational technologies in artificial intelligence (AI) and machine learning (ML) has started to revolutionize many aspects of our lives, while also significantly changing the way computational modeling and simulation are performed. Indeed, ML and other intelligent statistical techniques extend the applicability of computational mechanics, molecular modeling, topology optimization, and structural design, for instance by combining physics-based simulations with data-based inference. In this mini-symposium, we aim to provide a forum for the latest developments in applying AI-based technologies, such as ML, to applied mechanics, materials, and engineering problems in general. We welcome all contributions, with particular interest in the following areas:
- Applications of computational data science to design of materials at micro and meso scales.
- ML approaches to molecular dynamics and finite element methods.
- AI-based methods and approaches to additive manufacturing and 3D printing of complex materials.
- Data-driven methods for design, synthesis, and characterization of polymers and their composites.
- AI-based approaches to materials characterization and analysis.
- Hybrid methods in topology optimization.
- Amanda Howard, Pacific Northwest National Laboratory
- Panos Stinis, Pacific Northwest National Laboratory
- Nathaniel Trask, Sandia National Laboratories
Recent work has produced a surge of results in scientific machine learning for multiscale systems by exploiting the physical properties of the system. This minisymposium aims to explore the intersection of advanced physics-informed machine learning and its applications to engineering systems. We will focus on the latest methods for modeling multiscale systems, including multimodal training of machine learning systems, neural operators, and continual learning.
- Oliver Weeger, TU Darmstadt
- Fadi Aldakheel, Leibniz University Hannover
- Miguel Bessa, Brown University
- Nikolaos Bouklas, Cornell University
- WaiChing Sun, Columbia University
The advent of advanced manufacturing and materials technologies now provides the capability to architect microstructured materials such as 3D-printed lattice structures, fiber-reinforced or multiphase composites, foams, and electro- or magneto-active polymers. The mechanical and multifunctional behaviors of these metamaterials can be tailored to their specific engineering applications and are often highly nonlinear, anisotropic, inelastic, and multiphysical. Classical constitutive models are thus typically not flexible enough to capture their effective material behavior in multiscale and multiphysics simulations, while concurrent multiscale approaches are inherently computationally expensive and slow. In recent years, the formulation of constitutive models using highly flexible machine learning and surrogate modeling methods, such as artificial neural networks and deep learning, Gaussian processes, radial basis functions, and clustering methods, has therefore gained momentum. Nevertheless, many challenges remain to be addressed for machine learning-based material models, such as their accuracy, reliability, and physical soundness, their efficiency, and the consideration of parametric dependencies or uncertainties.
This minisymposium welcomes contributions on the state-of-the-art of machine learning methods for multiscale and multiphysics materials modeling. In particular, the areas of interest include, but are not limited to:
- Material models based on feed-forward, deep, recurrent, convolutional, graph and other types of neural networks, or Gaussian processes, radial basis functions, clustering methods, etc.
- Models for elastic, as well as dissipative, inelastic (elasto-plastic, visco-elastic, etc.), and multiphysically coupled (electro, magneto, thermo, chemo, mechanical, etc.) material behaviors
- Physics-enhanced/informed/augmented machine learning methods for thermodynamically consistent, physically, and mathematically sound material models
- Consideration of parametric dependencies, uncertainties, adaptivity, error estimates, etc. in machine learning methods for material modeling
- Efficient implementation and application of machine learning methods for multiscale and multiphysics simulations
- Yun-Che Wang, National Cheng Kung University
Mechanical metamaterials are artificially designed materials containing microstructural unit cells, with or without the activation of internal micro-processes. They have been demonstrated to exhibit auxeticity (negative Poisson's ratio), negative thermal expansion, chirality, acoustic activity, parity anomaly, etc. Through morphogenesis, metamaterials can achieve functionalities that can hardly be found in conventional materials. These unconventional properties may provide unprecedented opportunities in real-world applications such as noise reduction and vibration isolation. Theoretical understanding of such materials requires higher-order and/or higher-grade mathematical models, such as Cosserat mechanics or strain gradient elasticity. This minisymposium aims to discuss all machine-learning aspects of mechanical metamaterials, such as microstructure-sensitive design, effects of micro-processes, characterization of unusual properties, or practical applications. Contributions that use data from physical experiments to train machine learning models are also welcome.
- Manav Manav, ETH Zurich
- Pietro Carrara, ETH Zurich
Data-driven techniques are boosting many branches of computational solid mechanics, thanks to the introduction of several innovative tools: with or without models, with or without supervised learning, with interpretable or black-box models, etc. This is driven and catalyzed by the now customary availability of large datasets of material observations produced by full-field experimental measurements and/or high-fidelity computations. Moreover, recent revolutionary advances in machine learning allow epistemic laws encoding physical knowledge to be integrated directly into the training or discovery process, thereby decreasing data dependency while improving accuracy and physical consistency. The vast potential of synergistic approaches coupling data-driven and computational mechanics techniques is not yet fully deployed. Nevertheless, first evidence suggests that they can unlock a deeper comprehension of the processes defining material behavior by introducing new modeling paradigms. Possible future applications include material characterization, model development, novel methods for solving boundary value problems in mechanics, and surrogate models providing field quantities of interest, among others.
This mini-symposium aims to bring together researchers working on the development or application of data-driven and machine learning methods in computational solid mechanics. Topics of interest include, but are not limited to:
- Model-free data-driven computational mechanics
- Supervised and unsupervised data-driven discovery of constitutive laws
- Data-driven/Physics-informed learning of surrogate models
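As a concrete illustration of the model-free paradigm listed above, the following sketch applies the data-driven computing idea to a single statically determinate elastic bar: instead of fitting a constitutive law, the solver picks the measured (strain, stress) pair closest to satisfying the mechanical constraints. The synthetic dataset, load, and stiffness are illustrative assumptions, not part of this call.

```python
import random

random.seed(0)
E_true = 200.0                          # GPa; underlying stiffness, unknown to the solver

# Synthetic "experimental" database of (strain, stress) pairs with noise.
data = []
for _ in range(500):
    strain = random.uniform(0.0, 0.01)
    stress = E_true * strain + random.gauss(0.0, 0.05)
    data.append((strain, stress))

# Statically determinate bar: equilibrium fixes the stress exactly.
force, area = 0.6, 1.0                  # illustrative load and cross-section
sigma_eq = force / area                 # equilibrium stress

# Data-driven step: choose the database point nearest to the constraint
# set {(strain, sigma_eq)}; with the stress pinned by equilibrium, that
# means the point with the closest stress.
strain_dd, stress_dd = min(data, key=lambda p: abs(p[1] - sigma_eq))
print(f"data-driven strain: {strain_dd:.5f}")
print(f"linear-elastic reference strain: {sigma_eq / E_true:.5f}")
```

No material model is ever constructed: the solution is a point of the dataset itself, selected by the equilibrium constraint.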
- Ramin Bostanabad, UCI
- Shiguang Deng, UCI
- Audrey Olivier, USC
- Wei Chen, Northwestern
Recent technological advancements have drastically accelerated the analysis, design, and deployment of highly complex systems such as super-compressible materials, autonomous vehicles, novel microelectronics, bio-inspired underwater vehicles, and origami-inspired sensors with tunable sensitivities. The objective of this mini-symposium is to highlight recent advancements in probabilistic modeling of such systems while considering one or more uncertainty sources, such as lack of high-fidelity data, noise, and model-form errors due to missing physics or approximations. Topics of interest include (but are not limited to) multi-fidelity modeling, adaptive sampling and resource allocation, inverse parameter estimation, quantifying model-form errors, and reducing identifiability issues.
II. Digital Twins in Aerospace and Defense
- Alex Gorodetsky, University of Michigan
- Cosmin Safta, Sandia National Laboratories
Future applications of digital twins of complex systems require novel decision-making and control strategies for an asset using rapid models running online. Furthermore, as data are obtained for the asset operating in the real world, the underlying models upon which decisions are assessed must be updated. For example, next-generation aerospace vehicles may encounter rapidly changing conditions due to unexpected changes in the operating environment or the vehicle structure. Control laws that are used to operate these vehicles must learn and update their decision-making strategies as the situation changes. Performing such updates in real time (or faster) poses enormous computational challenges for many state-of-the-art data assimilation algorithms. In this minisymposium we invite speakers working on both parameter and state estimation algorithms and on applications where model learning is critically important for enabling real-time decisions.
- Mark Veyette, Lockheed Martin Space
As more powerful processors become available in low size, weight, and power (SWaP), ruggedized, and radiation-tolerant packages, it becomes feasible to deploy large AI/ML models to edge nodes for processing data at the sensor. AI/ML processing on edge platforms, from drones to satellites, allows for advanced data analysis even when limited communications preclude ground-in-the-loop processing. This minisymposium welcomes submissions on AI/ML at the edge for aerospace and defense applications and on the algorithms, software, and hardware that support them.
- Ahsan Choudhuri, UTEP
- Jack Chessa, UTEP
- Melvin Redmond, LMC
- Angel Flores-Abad, UTEP
- Joel Quintana, UTEP
- John Bird, UTEP
III. Machine Learning Finite Element and Numerical Methods
- Minglang Yin, Department of Biomedical Engineering, Johns Hopkins University
- Minliang Liu, Department of Biomedical Engineering, Georgia Institute of Technology
- Lu Lu, Department of Chemical and Biomolecular Engineering, University of Pennsylvania
Cardiovascular diseases continue to be the leading cause of death in the US and worldwide. Modern cardiovascular research has increasingly recognized the significance of computational modeling in understanding the multiscale and multiphysics mechanisms of cardiovascular (patho-)physiology, in interpreting an array of experimental and clinical data, and in disease prognosis and interventional planning, among many others. However, primary challenges in this regard arise from, but are not limited to, 1) developing mechanistic models for complex systems with multiscale and multiphysics features, 2) quantifying patient-specific model parameters from non-invasive clinical measurements, and 3) overwhelming computational costs in disease prognosis and in design optimization of prostheses. There is a pressing need to develop novel methodologies to address these challenges. Recently, progress in high-resolution multimodality imaging and clinical data, in conjunction with machine learning and artificial intelligence, has enabled new avenues for advancing cardiovascular modeling. This minisymposium will bring together scientists, engineers, and applied mathematicians working across various domains to provide a platform for discussing and presenting state-of-the-art machine learning techniques in cardiovascular modeling. Topics include but are not limited to:
- Surrogate modeling for electrophysiology and multiphysics/multiscale modeling for the cardiovascular system;
- Data-driven modeling and machine learning for precision medicine, digital twins, and design optimization;
- Multimodal machine learning—models that incorporate information from various sources: imaging, ECG, genetics, molecular, etc;
- Machine learning for noninvasive inference of patient-specific model parameters;
- Uncertainty quantification in cardiovascular modeling.
- Peng Du, Northwestern Polytechnical University
- Haibao Hu, Northwestern Polytechnical University
- Feng Ren, Northwestern Polytechnical University
Machine learning has been a research hot spot in recent years. The use of machine learning methods in computational fluid dynamics (CFD) is a growing trend with strong prospects in both fundamental research and engineering applications. This symposium aims to bring together the leading advances in this field. Participants are encouraged to share new work and to communicate actively on these topics. We hope to promote the development of this field and to stimulate new thinking and ideas about machine learning methods in CFD.
- Pavan Pranjivan Mehta, SISSA, International School for Advanced Studies
- Marta D'Elia, Meta Reality Labs
- Gianluigi Rozza, SISSA, International School for Advanced Studies
Fractional calculus is a generalized form of integer-order calculus. While an integer-order derivative is a local operator, a fractional derivative is a nonlocal operator. The notion of Brownian motion is extended to admit Lévy stable processes in the case of fractional diffusion. Further, nonlocal calculus is a generalized form of fractional calculus, which describes an even wider class of diffusion processes. Real-world applications of nonlocal models include turbulence, biology, visco-elasticity, fracture mechanics, finance, and plasma physics. Until recently, fractional and nonlocal equations received little attention, and many fundamental questions on modeling and simulating nonlocal problems remain unanswered.
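For readers less familiar with the nonlocality mentioned above, a standard reference form of these operators (stated here for concreteness; the notation is conventional and not taken from this call) is:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1: the value at time t
% depends on the whole history of u, not just on a neighborhood of t.
D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u'(s)}{(t-s)^{\alpha}} \, ds

% Fractional diffusion driven by an alpha-stable Levy process replaces
% the Laplacian by its fractional power:
\partial_t u = -(-\Delta)^{\alpha/2} u, \qquad 0 < \alpha \le 2
```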
This minisymposium will report on recent advances in numerical analysis, methods and algorithms (including machine learning), and real-world applications of fractional and nonlocal equations; thereby shedding some light on the aforementioned open questions.
- Yoshitaka Wada, Kindai University
- Yasushi Nakabayashi, Toyo University
- Masao Ogino, Daido University
- Akio Miyoshi, Insight Inc.
- Shinobu Yoshimura, University of Tokyo
Artificial intelligence has long been applied in the field of computational mechanics. However, few examples of applying the deep learning technologies that currently dominate the field to computational mechanics have been reported. The objective of this mini-symposium is to discuss how to apply artificial intelligence, such as deep learning and machine learning technologies, to computational mechanics. We warmly welcome any contribution related to computational mechanics or artificial intelligence that works toward uniting both technologies into significant and beneficial applications. In particular, we wish to discuss examples in which deep learning makes it possible to simulate objects that were previously difficult to simulate, or improves the accuracy of simulations that have been done in the past.
- Teeratorn Kadeethum, Sandia National Laboratories
- Sanghyun Lee, Florida State University
- Nikolaos Bouklas, Cornell University
- Hongkyu Yoon, Sandia National Laboratories
High-fidelity solvers (e.g., finite difference, finite volume, or finite element methods) have been widely used to approximate solutions of partial differential equations (PDEs) with given physical parameters and boundary conditions. However, these methods are mainly used to target specific physics, where the equations need to be solved only a few times with fixed parameters, boundary conditions, or geometries. On the other hand, large-scale inverse problems, optimization, or control often require predicting physics under many different setups (i.e., an extensive set of simulations must be explored). The high computational cost of high-fidelity models sometimes hinders these operations or, in some cases, renders them impractical.
Recently, the use of scientific machine learning to enhance, accelerate, or assist high-fidelity solver performance has been proposed. Some examples include:
- Improving the efficacy and convergence rate of high-fidelity solvers
- Guiding dynamic mesh refinement
- Fine-tuning stabilization parameters for discretization schemes and iterative solvers
This mini-symposium invites presentations on advancements in using machine learning to enhance high-fidelity solvers. The mini-symposium will bring together researchers working on fundamental and applied aspects of intersections between high-fidelity solvers and emerging machine-learning algorithms, as well as provide a forum for discussion, interaction, and assessment of their presented techniques.
Physics-Informed Machine Learning: Applications in the Energy Industry
- Ting Song, ExxonMobil Upstream Research Company
- Majid Rashtbehesht, ExxonMobil Upstream Research Company
- Marcelo DallAqua, ExxonMobil Upstream Research Company
- Dakshina Valiveti, ExxonMobil Upstream Research Company
Development of fast and efficient solvers is critical for addressing large-scale and challenging problems in the energy industry. Physics-informed machine learning is an emerging field that combines machine learning with conservation equations and has shown significant potential in applications where conventional methodologies lack the desired computational efficiency. These methods have also demonstrated a tractable approach to computationally intensive uncertainty quantification procedures, an integral aspect of real-world engineering applications arising from the noisy nature of measured data. To date, methods such as Physics-Informed Neural Networks (PINNs), Deep Operator Networks (DeepONets), and Fourier Neural Operators (FNOs) are among the recent breakthroughs in the scientific computing community that aim to address some of these long-standing computational challenges. The key objective is to design neural network surrogate forward and/or inverse models based on simulation and measured data, or a hybrid approach that additionally incorporates constraints arising from the governing physical laws.
We invite authors to present their work on applications of novel physics-informed machine learning methods to oil and gas, geothermal, carbon storage, and other energy industry applications. Topics can include, but are not limited to, seismic wave propagation, geophysical inversion, mechanics of porous media, subsurface fracture and fault mechanics, multiphase flow in wells and pipelines, reservoir simulation, offshore riser mechanics, and CO2 storage capacity and plume formation.
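The physics-informed idea described above can be sketched in a few lines: minimize the squared PDE residual at collocation points. To keep the sketch dependency-free, a small sine basis stands in for the neural network; the Poisson model problem, basis size, and collocation grid are illustrative assumptions, but the residual-minimization principle is the same one PINNs apply to a network.

```python
import math

# Model problem: -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# manufactured so that the exact solution is u(x) = sin(pi x).
K = 5                               # number of basis functions
N = 50                              # collocation points
xs = [(i + 0.5) / N for i in range(N)]

def f(x):
    return math.pi**2 * math.sin(math.pi * x)

# Residual at x_i: sum_k c_k (k pi)^2 sin(k pi x_i) - f(x_i).
# Minimizing the sum of squared residuals gives normal equations A^T A c = A^T b.
A = [[(k * math.pi)**2 * math.sin(k * math.pi * x) for k in range(1, K + 1)]
     for x in xs]
b = [f(x) for x in xs]

M = [[sum(A[i][p] * A[i][q] for i in range(N)) for q in range(K)]
     for p in range(K)]
rhs = [sum(A[i][p] * b[i] for i in range(N)) for p in range(K)]

# Solve the small SPD system by Gaussian elimination + back substitution.
for p in range(K):
    for q in range(p + 1, K):
        r = M[q][p] / M[p][p]
        for s in range(p, K):
            M[q][s] -= r * M[p][s]
        rhs[q] -= r * rhs[p]
c = [0.0] * K
for p in reversed(range(K)):
    c[p] = (rhs[p] - sum(M[p][q] * c[q] for q in range(p + 1, K))) / M[p][p]

def u(x):                           # physics-informed approximation
    return sum(c[k] * math.sin((k + 1) * math.pi * x) for k in range(K))

err = max(abs(u(x) - math.sin(math.pi * x)) for x in xs)
print(f"max error vs exact solution: {err:.2e}")
```

In an actual PINN the sine coefficients are replaced by network weights and the normal equations by stochastic gradient descent, but the loss being minimized is this same collocation residual.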
- Brendan Keith, Brown University
- Somdatta Goswami, Brown University
- Yue Yu, Lehigh University
The finite element method (FEM) is the dominant and archetypical numerical method for computer simulations in engineering mechanics. Its power lies in the method's modularity, scalability, and rigorous mathematical foundation. Meanwhile, FEM's widespread adoption is driven by its versatility and accuracy when used to simulate complicated structures in an overwhelming variety of physical modeling scenarios. Despite decades of FEM-based research, most practical implementations rely on parameter decisions and heuristics that greatly affect performance. At the same time, current simulation environments rarely seek to exploit the full range of problem-specific information that can be used to optimize efficiency and, ultimately, achieve peak performance.
As the research landscape has shifted toward data-driven technologies, it has become apparent that various aspects of FEM simulations can be significantly improved with machine learning techniques, thus leading to a new hybrid field at the interface of deep learning, high-performance computing, and engineering. The purpose of this minisymposium is to showcase recent work that explores the cutting-edge of this rapidly expanding field.
The data-driven methodology can be employed to design strategies and methods that interface neural networks with FEM by leveraging the fundamental data structure in mechanical balance equations. Additionally, in situations where there is no satisfactory empirical model to accurately describe a certain phenomenon (incomplete information about the underlying governing equations and/or boundary and initial conditions), but enough data are available, machine learning techniques can be used to design data-driven models. The increasing availability of data, together with advances in computing power and data-driven algorithms, has led to a trend toward integrating numerical methods like FEM with data-driven models for decision making and prediction in fields as diverse as engineering and the life sciences. To that end, intelligent surrogate models can be employed at the highest level to replace finite element models and perform efficient computations. At the most basic level, intelligent constitutive models can be constructed to provide flexibility, modularity, and simple integration.
Even though many research initiatives in these domains are underway, many questions remain open, particularly in engineering mechanics problems involving nonlinear partial differential equations. This session will bring together researchers in deep learning and numerical analysis who are using various augmented learning techniques to simplify and improve the efficiency and efficacy of integrated solvers. This minisymposium also attempts to bridge the theoretical-practical divide by allowing scholars and practitioners to share ideas and to discuss and evaluate existing theories and results.
Acknowledging similar trends in a variety of classical numerical methods besides FEM (e.g., finite volume, finite difference, isogeometric, and spectral methods), we invite submissions on all innovative topics related to the synthesis of deep learning and classical numerical methods in mechanics, including but not limited to:
- Solution transferability between FEM and deep learning algorithms.
- Developing constitutive models based on data.
- Accelerating numerical simulations by data-driven techniques.
- Biliana Paskaleva, Sandia National Laboratories
- Pavel Bochev, Sandia National Laboratories
- Paul Kuberry, Sandia National Laboratories
Circuit simulations, often referred to as Spice simulations, are foundational to modern circuit design. Currently, the prevalent approach is to build full-featured circuit models by using compact device models and enforcing Kirchhoff's laws on a user-defined network. This approach, though, suffers from two potential development and performance bottlenecks. The first one is prompted by the traditional compact model development approach, which uses combinations of empirical formulas and simplified solutions to semiconductor transport equations. It also relies on human expertise and is an expensive, time-consuming effort, often requiring highly skilled experts combining knowledge of solid-state physics, circuit design, model calibration, and numerical analysis. Besides the long development times, compact models do not always generalize well and adding new physics may require redeveloping the model from scratch. The second bottleneck stems from the fact that modern circuits can have thousands, even millions of components leading to very large full-featured circuit models that are computationally too expensive for use in a multi-query design analysis setting.
Data-driven approaches such as model order reduction, non-intrusive operator inference, dynamic mode decomposition, and deep neural network regression, to name a few, have the potential to overcome these bottlenecks by (i) providing the means to automate the development of compact semiconductor device models, either directly from data or from full-featured TCAD (technology computer-aided design) device models, and (ii) enabling the development of computationally efficient surrogates for full-featured circuit models. This session will focus on recent advances in the development of data-driven and machine-learned models for devices and circuits, as well as their integration into Spice simulations.
- Zhuo Zhuang, Department of Engineering Mechanics, Tsinghua University, Beijing, China
- Shan Tang, Department of Engineering Mechanics, Dalian University of Technology, Dalian, China
- Yanping Lian, Institute of Advanced Structure Technology, Beijing Institute of Technology, Beijing, China
- Zhanli Liu, Department of Engineering Mechanics, Tsinghua University, Beijing, China
With the steady development of computer science, machine learning and data science have made significant progress in recent decades. These techniques generally rely on a substantial amount of data samples to extract the abstract mapping hidden within the data. Hence, these technologies have gradually attracted the attention of researchers in the fields of computational mechanics and computational engineering. This mini-symposium aims to bring together mechanicians, computer scientists, and industrial researchers to promote research and application in big data analysis, data-driven computing, and artificial intelligence in engineering, as well as scientific exchange among scientists, practitioners, and engineers in affiliated disciplines.
Topics of interest include, but are not limited to:
- Data-driven based constitutive modelling;
- Machine learning based solutions of PDEs;
- Big data for design and optimization;
- Data-driven simulation techniques;
- Data-driven techniques in multi-scale and multi-physics simulations;
- Data-driven techniques for continuous and discrete methods;
- Machine learning based modeling for additive manufacturing (including process, microstructure, thermal stress, and mechanical properties).
- Rajat Arora, Advanced Micro Devices, Inc. (AMD)
- Ankit Shrivastava, Sandia National Laboratories
- Prashant K. Jha, University of Texas at Austin
Recent advancements in Machine Learning (ML) algorithms are opening new possibilities for performing large-scale simulations of complex physical systems in various fields of engineering and science. Traditional ML-based methods, e.g., deep neural networks, show a remarkable ability to learn underlying principles from data. A giant step in applying ML as a computational method for engineering problems was the augmentation of physical laws into the machine learning formulation (the loss function). While this was a step in the right direction, classical numerical methods have yet to be fully exploited in the development of ML-based techniques.
The recent wave of ML research with engineering applications in mind is towards tight coupling of ML and classical numerical methods to retain some of the well-understood properties of numerical methods, increase reliability, and expand the scope of ML-based techniques. The new ML algorithms, both physics-based and data-driven, are being developed to be used in tandem with numerical methods to accelerate large-scale engineering design and discovery.
This symposium aims to highlight such developments in multiple fields of science and engineering that integrate machine learning with scientific computational methods. It will also provide an opportunity for researchers, professionals, and students from academia and industry to meet and share cutting-edge developments through technical discussions. Potential topics include, but are not limited to, efforts on:
- Physics-informed / data-driven machine-learning models for forward and inverse modeling;
- Machine learning accelerated numerical simulations: Super-resolution of coarse-scale partial differential equation (PDE) solutions to obtain high-resolution solutions;
- Multiscale modeling and analysis using machine learning-based approaches;
- Multi-objective optimization strategies to accelerate network convergence;
- Physics-based feature extraction in machine learning: Towards the development of interpretable artificial intelligence models;
- Increasing the accuracy and reliability of machine learning techniques for applications in forward and optimization problems;
- Coupling neural networks with finite element methods.
- Judit Muñoz-Matute, Basque Center for Applied Mathematics (BCAM)
- David Pardo, The University of the Basque Country (UPV/EHU)
- Ignacio Muga, Pontificia Universidad Católica de Valparaíso (PUCV)
In the last decade, many deep-learning methods have been developed to approximate the solutions of Partial Differential Equations (PDEs) based on classical PDE approximation theories such as variational, collocation, or spectral methods. The exponential growth of interest in this topic started with the so-called Physics-Informed Neural Networks (PINNs), followed by variants such as variational PINNs (VPINNs), hp-VPINNs, and RAR-PINNs. Other interesting methods include the Deep Ritz, Deep Fourier, and Deep Petrov-Galerkin methods. All these techniques have been successfully applied to solve several forward and inverse problems for PDEs.
However, the aforementioned methods also present some limitations, and it has been shown that they often fail when attempting to solve general non-symmetric and non-coercive problems with low-regularity data. In this case, the correct selection of trial and test spaces is critical to ensure the stability and approximability of the method. For that reason, deep learning methods based on residual minimization (DL-MINRES) are of great interest nowadays. The latter builds on a solid theory that ensures the discrete stability of the approximated solutions.
This minisymposium invites contributions related to deep learning methods that approximate the solution of PDEs employing residual minimization and/or stabilization techniques. The scope of the minisymposium includes research topics related to:
- The Deep Learning version of general stabilized residual minimization methods like least-squares, first-order least squares or Petrov-Galerkin methods with optimal test functions.
- Stabilized PINNs, VPINNs or similar methods.
- DL-MINRES methods to solve: (a) linear, nonlinear or transient PDEs, (b) general systems of ODEs, (c) stochastic and parametric PDEs, (d) inverse problems, and (e) problems in both lower and higher dimensions.
- Works addressing integration and/or optimization problems in DL-MINRES.
- Convergence and stability analysis, (goal-oriented) adaptivity and error estimation of DL-MINRES methods.
- DL-MINRES methods combined with data-driven computing.
- Industrial applications of DL-MINRES methods.
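A discrete analogue may make the residual-minimization principle above concrete: for a non-symmetric advection-diffusion system, minimizing ||A u - b||^2 (here via conjugate gradients on the normal equations, i.e., CGNR) yields a stable solve even though A itself is neither symmetric nor coercive. The 1D model problem, mesh, and diffusion coefficient are illustrative assumptions.

```python
import math

# Model problem: -eps*u'' + u' = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with central differences (rows scaled by h^2).
eps, n = 0.2, 40                        # diffusion coefficient, interior points
h = 1.0 / (n + 1)

A = [[0.0] * n for _ in range(n)]       # non-symmetric tridiagonal matrix
for i in range(n):
    A[i][i] = 2.0 * eps
    if i > 0:
        A[i][i - 1] = -eps - h / 2.0
    if i < n - 1:
        A[i][i + 1] = -eps + h / 2.0
b = [h * h] * n

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def rmatvec(M, v):                      # M^T v, without forming M^T
    return [sum(M[j][i] * v[j] for j in range(len(v))) for i in range(len(v))]

# CGNR: conjugate gradients applied to A^T A u = A^T b, i.e.
# least-squares residual minimization without ever forming A^T A.
u = [0.0] * n
r = b[:]                                # residual b - A u
s = rmatvec(A, r)
p = s[:]
gamma = dot(s, s)
for _ in range(10 * n):
    q = matvec(A, p)
    alpha = gamma / dot(q, q)
    u = [ui + alpha * pi for ui, pi in zip(u, p)]
    r = [ri - alpha * qi for ri, qi in zip(r, q)]
    s = rmatvec(A, r)
    gamma_new = dot(s, s)
    if gamma_new < 1e-30:
        break
    p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
    gamma = gamma_new

res = math.sqrt(dot(r, r))
exact = [x - (math.exp(x / eps) - 1.0) / (math.exp(1.0 / eps) - 1.0)
         for x in ((i + 1) * h for i in range(n))]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
print(f"residual norm: {res:.1e}, max error vs exact solution: {err:.1e}")
```

DL-MINRES methods replace the linear ansatz with a neural network (and weight the residual in a suitable dual norm), but the stabilizing mechanism is the same: the minimized residual controls the error regardless of the lack of symmetry or coercivity.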
- Vinod Kumar, UTEP
- Natasha Sharma, UTEP
IV. Reduced-Order Modeling for Fluids, Solids, and Structures
- Elnaz Seylabi, University of Nevada Reno
- Parisa Khodabakhshi, Lehigh University
Uncertainty quantification and optimization problems in large- and multi-scale engineering applications require running many computationally demanding forward simulations, making these problems practically intractable. Surrogate modeling is a viable solution for reducing the computational burden of these many-query applications, where a low-cost yet reasonably accurate model replaces the computationally expensive forward model to define a mapping between the input or design parameter space and the quantities of interest. In this regard, Machine Learning (ML) approaches have received significant attention in the past decade due to their versatility and flexibility. The idea behind ML methods is to develop a nonlinear mapping from training data in order to make predictions for unseen scenarios. To optimize the underlying structure of the mapping and ensure fidelity, ML-based methods require access to a significant amount of data. Even then, the results may not be generalizable, which limits their applicability in most engineering applications, where training data is scarce and its generation is computationally demanding. On the other hand, scientific ML (SciML) methods aim to embed the physics of the underlying problem and utilize domain knowledge to increase the reliability of the outputs of such physics-informed models with limited or no data. In this mini-symposium, we invite contributions on recent algorithmic advances and successful examples of utilizing SciML methods and other relevant techniques in surrogate modeling of dynamical systems.
- Spencer Bryngelson, Georgia Institute of Technology
- Florian Schaefer, Georgia Institute of Technology
- Ali Mani, Stanford University
From turbulent flows to smart materials, multi-scale phenomena are a pervasive challenge in computational mechanics. Their efficient simulation rests on summarizing microscopic behavior and modeling its impact on macroscopic scales, an approach usually referred to as homogenization or closure modeling. Recently, these problems have been revisited from a statistical perspective, using ideas from data science and machine learning to represent distributions of microscopic states or to learn coarse-grained models directly from data. The mini-symposium will feature recent developments in this exciting area at the interface of physics and computational data science.
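A minimal sketch of the closure problem mentioned above, on synthetic one-dimensional data (the field, filter width, and fitted model form are illustrative assumptions, not any speaker's method): coarse-graining a fine-scale field leaves a sub-filter residual that a closure model must represent, and the simplest data-driven closure is a single coefficient fitted by least squares.

```python
import numpy as np

# Fine-scale field on a periodic grid (stand-in for resolved simulation data).
n = 512
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(16.0 * x)   # large scale + small scale

def box_filter(f, width=16):
    # Periodic moving average: the coarse-graining operator.
    kernel = np.ones(width) / width
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel, n)))

u_bar = box_filter(u)
# Sub-filter closure term for the quadratic nonlinearity u*u:
# tau = bar(u u) - bar(u) bar(u); this is what a closure must model.
tau = box_filter(u * u) - u_bar * u_bar

# Fit the simplest possible data-driven closure: tau ~ c * (du_bar/dx)^2
grad = np.gradient(u_bar, x)
c = np.sum(tau * grad**2) / np.sum(grad**4)
print(f"fitted closure coefficient c = {c:.3f}")
```

The statistical and ML approaches in this session replace the one-coefficient ansatz with learned representations of the microscopic state distribution, but the filtered-residual structure is the same.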
- Bogdan I. Epureanu, University of Michigan-Ann Arbor
- Amin Ghadami, University of Southern California
Recent years have witnessed a significant shift towards far more complex and large-scale engineering systems than ever before. Despite remarkable increases in computational power, many real-world systems are still far too complex to simulate with digital twins based on high-fidelity physics-based numerical methods. Model order reduction presents a promising way to tackle the computational bottleneck arising from computational intensity and model complexity. Nevertheless, such techniques face challenges when used on systems exhibiting a wide variety of parameter-dependent nonlinear behaviors or localized features. Recent advances in data-driven analysis of systems and machine learning approaches have revolutionized how we model engineering systems. Consequently, one can augment or optimize model reduction techniques through hybrid methods that combine data-driven learning processes with physics-based models to tackle previously unattainable challenges in modeling and analysis of complex engineering systems and structures. This symposium provides a platform to share the most recent developments on the integration of data-driven and physics-based models for model reduction across fields. Recent advances in mechanistic model reduction techniques in computational fluid and structural mechanics, model reduction-based prediction and control, and novel reduced-order modeling methods are welcome.
- Data-assisted reduced-order modeling
- Reduced-order prediction
- Reduced-order models guided by machine learning
- Data-driven reduced-order control of fluids and structures
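As a concrete baseline for the data-driven model reduction in the topics above, the following sketch (synthetic snapshot data; all sizes are arbitrary) builds a proper orthogonal decomposition (POD) basis by truncated SVD and measures the reconstruction error of the reduced representation.

```python
import numpy as np

# Snapshot matrix: each column is the full-order state at one time/parameter.
rng = np.random.default_rng(0)
n_dof, n_snap = 400, 60
t = np.linspace(0.0, 1.0, n_snap)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
# Synthetic snapshots dominated by two coherent modes plus small noise.
snapshots = (np.sin(np.pi * x) * np.cos(2.0 * np.pi * t)
             + 0.5 * np.sin(2.0 * np.pi * x) * np.sin(4.0 * np.pi * t)
             + 1e-3 * rng.standard_normal((n_dof, n_snap)))

# Proper orthogonal decomposition: truncated SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                       # reduced dimension
basis = U[:, :r]            # POD modes

# Project the full states onto r coordinates and reconstruct them.
a = basis.T @ snapshots     # reduced coordinates (r x n_snap)
recon = basis @ a
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(f"rank-{r} POD reconstruction error: {rel_err:.2e}")
```

The hybrid methods solicited here typically replace the linear projection step with a learned nonlinear map, or close the reduced dynamics with a data-driven model.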
- Rajeev Jaiman, University of British Columbia
- Frederick Gosselin, Polytechnique Montréal
- Jasmin Jelovica, The University of British Columbia
- Wrik Mallik, The University of Glasgow, Scotland
This mini-symposium focuses on data-driven model reduction and machine learning (ML) for fluid flow, structural dynamics, and fluid-structure interaction. General-purpose black-box ML techniques are computationally efficient and scalable but may not perform well beyond the data they are trained on, and they lack physical interpretability. Moreover, purely data-driven approaches ignore the benefits and capabilities of traditional numerical methods. To address these challenges, contributions and new achievements in algorithm design and software development for hybrid physics-based ML (PBML) techniques are solicited. Critical assessment and improvement of conventional ML techniques toward accurate and robust surrogate models, as well as the integration of model order reduction (MOR) with high-fidelity simulations and their application to real-world problems, are appreciated. This mini-symposium aims to provide a platform for investigators from engineering, physics, mathematics, and other backgrounds to disseminate and discuss PBML and data-driven MOR techniques for prediction, analysis, and design, especially in the context of digital twins and various applications.
- Yannis Georgiou, National Technical University of Athens
- Yannis Georgiou, Purdue University
The scope of this mini-symposium is to exchange ideas among participants on the systematic exploitation of datasets furnished by multi-physics sensors installed aboard ocean and aerospace platforms, taken as representatives of complicated infrastructure with many local critical areas, for monitoring propulsion performance and structural health. Effective exploitation and physical interpretation can be achieved by reducing redundancy, which reveals the properties of the motion while also detecting anomalies. The computational paradigm of machine learning also bears transformative potential for using multi-physics datasets as experiences for learning, and therefore for prediction. The main underlying challenge is to learn a complex physical system on the basis of datasets collected by multi-physics sensors during operation, and to enhance this learning by properly exciting the system so that its coupled dynamics are revealed to a sensor network. The theoretical challenges are data order reduction that admits physical interpretation in terms of the invariants of the motion and slow and inertial manifolds, and the use of these reduced data to build a new class of artificial networks that are restricted by the physics of the reduced coupled dynamics yet able to create new physical information from the multi-physics sensors installed on the platform. Works addressing these and other relevant issues are welcome.
V. MMLDT in STEM Education
- Vinod Kumar, UTEP
- Jack Chessa, UTEP
VI. Digital Twins and Mechanistic Data Science in Additive Manufacturing
- Elise Walker, Sandia National Laboratories
- Troy Shilt, Sandia National Laboratories
- Jonas Actor, Sandia National Laboratories
With the synthesis of new high-throughput methods, materials R&D is poised for the discovery, characterization, and design of robust materials and manufacturing processes through the development and implementation of multimodal, physics-informed machine learning algorithms. The fusion of human expert materials knowledge with multimodal, physically constrained machine learning algorithms can aid in detecting the "fingerprints" critical to materials behavior, forecasting component performance, and adapting manufacturing strategies.
This minisymposium convenes world-class researchers in advanced manufacturing, materials characterization, data science, modeling/simulation, and hardware engineering to showcase works that detect critical features that govern material behavior. Researchers from national labs, academia, and industry will present and discuss topics such as hybrid, physics-informed machine learning methods to understand process-structure mappings, surrogate models using multimodal data streams combining experiments and simulations, and machine learning guided process optimization.
- Frank Medina, UTEP
- Zhengtao Gan, UTEP
- Eric MacDonald, UTEP
VII. Digital Thread in Product Lifecycle
- Pierre Jehel, Université Paris-Saclay, CentraleSupélec, ENS Paris-Saclay, CNRS, LMPS - Laboratoire de Mécanique Paris-Saclay
- Judicaël Dehotin, SNCF Réseau, Direction Générale Industrielle & Ingénierie
- Filippo Gatti, Université Paris-Saclay, CentraleSupélec, ENS Paris-Saclay, CNRS, LMPS - Laboratoire de Mécanique Paris-Saclay
- Stéphane Vialle, PhD, Université Paris-Saclay, CentraleSupélec, CNRS, LISN - Laboratoire interdisciplinaire des sciences du numérique
- Céline Hudelot, PhD, Université Paris-Saclay, CentraleSupélec, MICS - Mathématiques et Informatique pour la Complexité et les Systèmes
The minisymposium is organized by a multidisciplinary team of researchers and practitioners who are part of the Minerve research project, which develops methodologies and digital tools for collaboration and digital continuity over the lifecycle of railroad infrastructures. Minerve is a public-private $35 million project gathering 4 industrial partners (SNCF, RATP, Colas Rail, and Kayrros), 1 private institute for experimental research (IREX), and 1 academic institution (CentraleSupélec | Université Paris-Saclay). It is supported by the French government in the framework of the Recovery Plan and the Investing for the Future program.
Railroad infrastructure management requires the interaction of many disciplines and fields of expertise to guarantee the level of performance needed for effective railroad operation and for reaching the net-zero carbon emission objectives set by states such as France. A systemic approach needs to be adopted in which tracks, bridges, tunnels, earthworks, and the electrification and signaling systems are all integrated into the railroad performance assessment.
Railroad systems span wide areas; in France, the double-track railroad network is 27,000 km long. Consequently, railroads are exposed to diverse natural hazards, which have been increasing because of climate change: floods, fires, wind, scour, and heat waves. Conversely, railroads expose the surrounding human and animal populations to hazards such as ground vibrations. The lifecycle of the railroad infrastructure also needs to be integrated into the evaluation of its performance, even for components such as bridges, tunnels, and earthworks that are designed to remain in service for several decades, often with little information about how they will change over time. Moreover, the global climate context adds constraints on reducing the carbon footprint and improving environmental performance. Finally, a railroad infrastructure can offer services to its environment because it has the potential to host animals and plants. Those are some of the issues that need to be addressed when assessing the performance of a railroad infrastructure.
Cognitive digital twins provide current and forecasted information to help managers make informed decisions for maintaining the performance of a system of assets. Besides storing and representing the available data about the system, cognitive digital twins integrate computational capabilities for learning from the data, along with knowledge representations for understanding and answering user requests, thus generating a digital thread over the whole asset lifecycle.
This minisymposium calls for contributions in a wide range of disciplines, from computer science through data science to computational mechanics. Technical papers, case studies, and literature reviews can be presented as long as they address at least one of the following research directions:
- Railroad infrastructure data collection, storage, and retrieval. This includes designing database architectures and populating and cleaning large heterogeneous datasets.
- Physics-based computational simulations for assessing the reliability of critical railroad infrastructures, especially in the presence of uncertainties.
- Data-based modeling of the behavior of railroad infrastructure components in given critical situations, for instance machine learning approaches for speeding up physics-based simulations while providing digital twins with learning capabilities.
- Knowledge acquisition towards establishing the semantics of railroad infrastructure system management. This includes conducting and synthesizing interviews with experts in various disciplines as well as the development and management of ontologies.
- System engineering and architecture applied to the railroad domain, including environment and climate assessment.
- Francisco Chinesta, ENSAM, France
- Elias Cueto, Universidad de Zaragoza, Spain
- Ron Kenett, kpa, Israel
- Itai Dattner, Haifa University, Israel
Digital twins, combining physics-based models efficiently manipulated using advanced model order reduction techniques with data-driven models built on advanced machine learning technologies, have become major protagonists of twenty-first-century science and technology, in particular in smart industry, smart cities, and smart nations. This MS will address major advances at these three scales: that of products and industry, and those concerning complex systems of systems (city and nation) involving critical components and systems.
- Chris McComb, Carnegie Mellon University
- Jonathan Cagan, Carnegie Mellon University
- Rebecca Taylor, Carnegie Mellon University
- Conrad Tucker, Carnegie Mellon University
VIII. Skills for Digital Workforce
- Dongjin Lee, University of California San Diego
- Boris Kramer, University of California San Diego
- Paromita Nath, Rowan University
For digital twins of complex engineering systems, risk and reliability estimation is essential from the early design stage through operation and decommissioning. For instance, accurate estimates of risk and reliability can assist in optimal design, maintenance scheduling, and effective operation. Physical twins are subject to many uncertainties, and these compound for the digital twin, where modeling assumptions have to be made and model parameters have to be chosen. These uncertainties stem from loads, material properties, manufacturing processes, or operational environments. Risk estimation and reliability analysis methods quantify such uncertainties to capture the risk associated with failure events at the timescales required for optimal operational decisions.
Despite the benefits of real-time risk quantification, many existing risk and reliability methods have typically been employed in the design stage rather than for real-time assessment during operation. This is mainly due to the following challenges: for a real-time digital twin, risk or reliability analysis methods need to (1) handle incomplete, noisy, and big data; (2) be equipped with predictive models and uncertainty estimates of degradation and failure processes, affected by environmental conditions, operating strategies, and external influencing factors; and (3) employ computationally efficient surrogate models that can be coupled with other models, conflict resolution mechanisms, and operation. This minisymposium will look at various digital twin applications in manufacturing, biomedical engineering, and civil infrastructure, and will present a broad class of novel risk and reliability analysis methods employed for digital twins to address the above-mentioned challenges.
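The simplest instance of the reliability analysis described above is a Monte Carlo estimate of a failure probability; the limit state and the input distributions below are hypothetical illustrations, not drawn from any application in the session.

```python
import numpy as np

def limit_state(load, capacity):
    # Failure when demand exceeds capacity: g < 0 indicates failure.
    return capacity - load

rng = np.random.default_rng(42)
n = 200_000
# Uncertain inputs (hypothetical distributions for illustration).
load = rng.normal(10.0, 2.0, n)        # operational load
capacity = rng.normal(18.0, 1.5, n)    # degraded component capacity

g = limit_state(load, capacity)
pf = np.mean(g < 0.0)                  # Monte Carlo failure probability
se = np.sqrt(pf * (1.0 - pf) / n)      # standard error of the estimate
print(f"estimated failure probability: {pf:.4f} +/- {se:.4f}")
```

The real-time challenge in the description is precisely that `limit_state` is an expensive coupled model rather than a closed-form expression, which is where the surrogate models of challenge (3) enter.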
IX. Geosystems and Geostatistics
- Shabnam Semnani, UC San Diego
- JS Chen, UC San Diego
Many of today’s societal needs, such as mitigation of natural hazards, energy and environmental sustainability, and access to natural resources, require studying the physical properties and processes of the Earth and geophysical systems across all scales from both scientific and technological perspectives. Geomaterials are heterogeneous and multi-phase porous materials; therefore, advanced multi-scale techniques are needed to fully capture the complex behavior of geomaterials in geosystems and geoengineering applications. These multi-scale techniques often require information regarding the constituents and microstructure of geomaterials, which can be obtained from various imaging techniques. This mini-symposium aims to provide a forum to discuss recent advances in applications of artificial intelligence (AI), machine learning (ML), and data-driven and data-centric methods to enhance modeling of geomaterials, geophysical systems, and imaging technologies across scales. The topics of interest include, but are not limited to: 1) Multi-scale fluid flow or mechanistic simulations; 2) Data-driven modeling of geophysical systems across scales; 3) Data analytics in geosystems applications, e.g. image processing, feature identification, and data fusion; 4) Dimensionality reduction methods; 5) Geohazards prediction and assessment; and 6) Data-driven inverse modeling.
X. Mechanistic Based Machine Learning for Autonomous Systems
- Xiangyun Long, Hunan University
Large-scale, high-end complex equipment such as launch vehicles, on-orbit satellites, deep-space probes, and nuclear power systems often operates in special and extreme environments involving vacuum and irradiation. The increasing complexity of the service environment affects both the performance of the equipment and the supporting health monitoring, while the demands of maintaining such systems keep growing. Unmanned service conditions call for intelligent assessment of equipment fatigue damage status. It is therefore necessary to build a fatigue digital twin system for complex equipment to meet the needs of unmanned, intelligent, real-time health monitoring. Such a fatigue digital twin model for equipment like rockets and satellites would first intelligently perceive fatigue damage in special service environments; then construct a physics-information fusion model; then conduct real-time damage assessment; and finally perform load optimization during equipment service. Fatigue analysis of complex equipment operating in special environments, such as rockets, on-orbit satellites, and nuclear power systems, is thus of great significance.
The purpose of this proposal is to communicate the establishment of a fatigue digital twin system based on deep learning: perceiving fatigue damage parameters from multi-source information, establishing a fatigue damage model based on physics-information fusion, proposing a deep convolutional neural network for real-time fatigue damage prediction, conducting load optimization and rapid decision making, and integrating a fatigue digital twin cloud-edge coordination system applied to the real-time health monitoring of large high-end equipment in special and extreme environments. The proposal therefore has the following sub-objectives:
- Intelligent sensing of fatigue damage parameters considering multi-source information. Develop a deep learning intelligent fatigue damage identification and measurement method based on multi-source information. The length of the fatigue crack, the displacement field, the stress field, and the strain field can be measured automatically.
- Fatigue damage model based on physics-information fusion. Develop a deep-learning intelligent fatigue damage model based on physics-information fusion for small-sample problems. Construct a deep learning model that satisfies the perceptual information and the physical prior knowledge simultaneously.
- Real-time prediction method for fatigue damage based on a deep convolutional neural network. To meet the real-time requirements of fatigue damage analysis during online service, develop a physics-informed neural network that evaluates damage status and residual life in real time.
- Load optimization and real-time decisions. According to the results of the above damage assessment, intelligently adjust the load on critical components to avoid fatigue damage and make real-time decisions to extend residual life.
In the above four parts, data and models are transmitted through the fatigue digital twin cloud-edge collaborative system, and the fatigue digital twin system based on physics-information fusion is finally integrated: the fatigue damage digital twin framework is built on deep learning; real-time data transmission and rapid physical model correction are conducted via neural networks and cloud-edge collaborative technology; and intelligent sensing, physics-information fusion for damage, intelligent damage assessment, and intelligent load optimization are integrated into the fatigue digital twin system. This proposal can provide a new paradigm for intelligent, unmanned identification and health monitoring of damage in critical components of complex equipment.
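As a small, self-contained illustration of fusing a physical fatigue model with measured data: the Paris crack-growth law is a standard physical prior, but the data, noise level, and parameter values below are synthetic assumptions, not results from the proposal.

```python
import numpy as np

# Paris-law fatigue model: da/dN = C * (dK)**m. Fusing this physical
# prior with measured crack-growth data reduces to estimating (C, m).
rng = np.random.default_rng(1)
dK = np.linspace(10.0, 40.0, 15)          # stress intensity factor range
true_C, true_m = 1e-11, 3.0
# Noisy sensor measurements of crack growth rate (synthetic).
dadN = true_C * dK**true_m * np.exp(0.05 * rng.standard_normal(15))

# Log-transforming makes the physics-constrained fit linear:
# log(da/dN) = log(C) + m * log(dK)
A = np.vstack([np.ones_like(dK), np.log(dK)]).T
coef, *_ = np.linalg.lstsq(A, np.log(dadN), rcond=None)
C_hat, m_hat = np.exp(coef[0]), coef[1]
print(f"estimated Paris exponent m = {m_hat:.2f} (true {true_m})")
```

In the proposed system this fusion step would be performed by a deep network that also ingests displacement, stress, and strain fields; the least-squares fit above only shows the physics-plus-data structure at its smallest scale.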
- Marco Tezzele, University of Texas at Austin
- Ionut Farcas, University of Texas at Austin
- Gianluigi Rozza, International School of Advanced Studies
Reduced order models (ROMs) and data-driven methods are fundamental tools to allow fast and reliable numerical predictions in computational science and engineering applications. This is especially true for outer-loop scenarios such as optimization and uncertainty quantification, where usually a large number of parameter evaluations are needed.
Nevertheless, the construction of robust and accurate data-driven ROMs for complex large-scale engineering applications is still a challenging task. These simulations are often computationally expensive and therefore characterized by a relatively small amount of available data. To this end, scientific machine learning has emerged as a reliable tool able to enhance data-driven models with domain knowledge, physical principles, and artificial intelligence. Still, many research directions remain open, such as ROMs’ stability, interpretability of nonlinear techniques based on artificial neural networks, and data fusion from models with heterogeneous fidelities. The aim of this mini-symposium is to present recent computational strategies to improve the construction of data-driven ROMs for large-scale simulations and to foster the discussion about the future challenges that need to be addressed by the community.
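One of the data-fusion ideas mentioned above, combining models of heterogeneous fidelities, can be sketched as an additive discrepancy correction; both models and all parameters below are hypothetical stand-ins for an expensive simulation and its cheap ROM.

```python
import numpy as np

def high_fidelity(p):
    # Expensive model (stand-in): affordable only at a few samples.
    return np.sin(3.0 * p) + 0.5 * p

def low_fidelity(p):
    # Cheap but biased model: captures the trend, misses detail.
    return 0.9 * np.sin(3.0 * p) + 0.4 * p + 0.1

# Few high-fidelity runs; low-fidelity evaluations are treated as free.
p_train = np.linspace(0.0, 2.0, 10)
discrepancy = high_fidelity(p_train) - low_fidelity(p_train)

# Learn an additive correction for the low-fidelity model.
corr = np.polynomial.polynomial.Polynomial.fit(p_train, discrepancy, 5)

p_test = np.linspace(0.0, 2.0, 100)
fused = low_fidelity(p_test) + corr(p_test)
err_lf = np.max(np.abs(low_fidelity(p_test) - high_fidelity(p_test)))
err_mf = np.max(np.abs(fused - high_fidelity(p_test)))
print(f"low-fidelity error {err_lf:.3f} -> multifidelity error {err_mf:.4f}")
```

The open questions raised above, such as stability and interpretability, concern exactly how far such learned corrections can be trusted outside the training samples.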