Fueled by breakthrough technology developments, the biological, biomedical, and behavioral sciences are now collecting more data than ever before. There is a critical need for time- and cost-efficient strategies to analyze and interpret these data to advance human health. The recent rise of machine learning as a powerful technique to integrate multimodality, multifidelity data and reveal correlations between intertwined phenomena presents a special opportunity in this regard. However, machine learning alone ignores the fundamental laws of physics and can result in ill-posed problems or non-physical solutions. Multiscale modeling is a successful strategy to integrate multiscale, multiphysics data and uncover mechanisms that explain the emergence of function. However, multiscale modeling alone often fails to efficiently combine large datasets from different sources and different levels of resolution. Here we demonstrate that machine learning and multiscale modeling can naturally complement each other to create robust predictive models that integrate the underlying physics to manage ill-posed problems and explore massive design spaces. We review the current literature, highlight applications and opportunities, address open questions, and discuss potential challenges and limitations in four overarching topical areas: ordinary differential equations, partial differential equations, data-driven approaches, and theory-driven approaches. Towards these goals, we leverage expertise in applied mathematics, computer science, computational biology, biophysics, biomechanics, engineering mechanics, experimentation, and medicine. Our multidisciplinary perspective suggests that integrating machine learning and multiscale modeling can provide new insights into disease mechanisms, help identify new targets and treatment strategies, and inform decision making for the benefit of human health.
Toward this goal, the main objective of machine learning is to identify correlations among big data. The focus in the biological, biomedical, and behavioral sciences is currently shifting from solving forward problems based on sparse data towards solving inverse problems to explain large datasets.23 Today, multiscale simulations in the biological, biomedical, and behavioral sciences seek to infer the behavior of the system, assuming that we have access to massive amounts of data, while the governing equations and their parameters are not precisely known.24,25,26 This is where machine learning becomes critical: machine learning allows us to systematically preprocess massive amounts of data, integrate and analyze them from different input modalities and different levels of fidelity, identify correlations, and infer the dynamics of the overall system. Similarly, we can use machine learning to quantify the agreement of correlations, for example by comparing computationally simulated and experimentally measured features across multiple scales using Bayesian inference and uncertainty quantification.27
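As a minimal sketch of this last idea, the following assumes a hypothetical one-parameter decay model standing in for a computationally simulated feature, and uses a random-walk Metropolis sampler to infer the parameter from noisy synthetic "measurements"; all names, values, and the model itself are illustrative, not drawn from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter "simulation": exponential decay of a measured feature.
def simulate(rate, t):
    return np.exp(-rate * t)

# Synthetic "experimental" data generated with a true rate of 1.5 plus noise.
t = np.linspace(0.0, 2.0, 20)
data = simulate(1.5, t) + 0.05 * rng.normal(size=t.size)

def log_posterior(rate, sigma=0.05):
    if rate <= 0.0:                      # flat prior restricted to rate > 0
        return -np.inf
    residual = data - simulate(rate, t)  # simulated-vs-measured mismatch
    return -0.5 * np.sum((residual / sigma) ** 2)

# Random-walk Metropolis sampler over the single rate parameter.
samples, rate = [], 1.0
lp = log_posterior(rate)
for _ in range(5000):
    proposal = rate + 0.1 * rng.normal()
    lp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_new - lp:
        rate, lp = proposal, lp_new
    samples.append(rate)

posterior = np.array(samples[1000:])     # discard burn-in
print(posterior.mean(), posterior.std()) # point estimate and uncertainty
```

The posterior spread is the uncertainty quantification: it tells us how tightly the data constrain the simulation parameter, which is exactly the quantity a multiscale calibration needs.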
Figure 1 illustrates the integration of machine learning and multiscale modeling on the parameter level by constraining their spaces, identifying values, and analyzing their sensitivity, and on the system level by exploiting the underlying physics, constraining design spaces, and identifying system dynamics. Machine learning provides the appropriate tools for supplementing training data, preventing overfitting, managing ill-posed problems, creating surrogate models, and quantifying uncertainty. Multiscale modeling integrates the underlying physics for identifying relevant features, exploring their interaction, elucidating mechanisms, bridging scales, and understanding the emergence of function. We have structured this review around four distinct but overlapping methodological areas: ordinary and partial differential equations, and data- and theory-driven machine learning. These four themes roughly map onto the four corners of the data-physics space, where the amount of available data increases from top to bottom and physical knowledge increases from left to right. For each area, we identify challenges, open questions, and opportunities, and highlight various examples from the life sciences. For convenience, we summarize the most important terms and technologies associated with machine learning with examples from multiscale modeling in Box 1. We envision that our article will spark discussion and inspire scientists in the fields of machine learning and multiscale modeling to join forces towards creating predictive tools to reliably and robustly predict biological, biomedical, and behavioral systems for the benefit of human health.
A major challenge in the biological, biomedical, and behavioral sciences is to understand systems for which the underlying data are incomplete and the physics are not yet fully understood. In other words, with a complete set of high-resolution data, we could apply machine learning to explore design spaces and identify correlations; with a validated and calibrated set of physics equations and material parameters, we could apply multiscale modeling to predict system dynamics and identify causality. By integrating machine learning and multiscale modeling we can leverage the potential of both, with the ultimate goal of providing quantitative predictive insight into biological systems. Figure 2 illustrates how we could integrate machine learning and multiscale modeling to better understand the cardiac system.
Multiscale modeling can teach machine learning how to exploit the underlying physics described by, e.g., the ordinary differential equations of cellular electrophysiology and the partial differential equations of electro-mechanical coupling, and constrain the design spaces; machine learning can teach multiscale modeling how to identify parameter values, e.g., the gating variables that govern local ion channel dynamics, and identify system dynamics, e.g., the anisotropic signal propagation that governs global diffusion. This natural synergy presents new challenges and opportunities in the biological, biomedical, and behavioral sciences.
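To make the cellular side of this picture concrete, here is a sketch of a single Hodgkin-Huxley-style gating variable of the kind referenced above; the rate laws in `gate_rates` and all constants are illustrative placeholders, not a calibrated ion-channel model:

```python
import numpy as np

# Generic Hodgkin-Huxley-style gating variable: dn/dt = alpha*(1 - n) - beta*n,
# where n is the fraction of open channels at membrane voltage v.
def gate_rates(v):
    alpha = 0.1 * np.exp(v / 25.0)    # illustrative opening rate, not fitted
    beta = 0.125 * np.exp(-v / 80.0)  # illustrative closing rate
    return alpha, beta

def integrate_gate(v, n0=0.0, dt=0.01, steps=2000):
    n = n0
    for _ in range(steps):
        alpha, beta = gate_rates(v)
        n += dt * (alpha * (1.0 - n) - beta * n)  # forward Euler step
    return n

# At a fixed voltage the gate relaxes to its steady state alpha/(alpha + beta).
v = 10.0
alpha, beta = gate_rates(v)
n_inf = alpha / (alpha + beta)
print(integrate_gate(v), n_inf)
```

Ordinary differential equations of this form are the local building blocks whose parameters (here, the rate laws) machine learning can help identify from data, while the physics fixes the structure of the model.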
The interaction between the different scales, from the cell to the tissue and organ levels, is generally complex and involves temporally and spatially varying fields with many unknown parameters.42 Prior physics-based information in the form of partial differential equations, boundary conditions, and constraints can regularize a machine learning approach in such a way that it can robustly learn from small and noisy data that evolve in time and space. Gaussian processes and neural networks have proven particularly powerful in this regard.43,44,45 For Gaussian process regression, the partial differential equation is encoded in an informative function prior;46 for deep neural networks, the partial differential equation induces a new neural network coupled to the standard uninformed data-driven neural network,22 see Fig. 3. Coupling data and partial differential equations within a deep neural network in this way imposes the physics as a constraint on the expressive power of the data-driven network. New theory-driven approaches are required to extend this framework to stochastic partial differential equations using generative adversarial networks, to fractional partial differential equations in systems with memory using high-order discrete formulas, and to coupled systems of partial differential equations in multiscale, multiphysics modeling. Multiscale modeling is a critical step, since biological systems typically possess a hierarchy of structure, mechanical properties, and function across the spatial and temporal scales.
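One way to picture physics acting as a constraint on a data-driven model, in a deliberately simplified linear setting: the sketch below fits a polynomial surrogate to three noisy observations while penalizing the residual of an assumed governing equation du/dt + k u = 0 at collocation points. Because both terms are linear in the coefficients, a single stacked least-squares solve stands in for the coupled neural networks described above (an assumption made purely to keep the example self-contained and runnable):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: recover u(t) obeying du/dt + k*u = 0 (k known) from only
# three noisy observations, by augmenting the data misfit with a physics residual.
k = 1.0
t_data = np.array([0.0, 0.5, 1.0])
u_data = np.exp(-k * t_data) + 0.01 * rng.normal(size=3)

degree = 5
t_phys = np.linspace(0.0, 1.0, 50)            # collocation points for the residual

def vandermonde(t):
    return np.vander(t, degree + 1, increasing=True)

def d_vandermonde(t):
    V = np.zeros((t.size, degree + 1))
    for j in range(1, degree + 1):
        V[:, j] = j * t ** (j - 1)            # derivative of each monomial
    return V

# Stack data equations u(t_i) = u_i with physics equations u'(t) + k*u(t) = 0.
A = np.vstack([vandermonde(t_data),
               d_vandermonde(t_phys) + k * vandermonde(t_phys)])
b = np.concatenate([u_data, np.zeros(t_phys.size)])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# The physics-regularized fit tracks exp(-t) even between the sparse data points.
u_pred = vandermonde(np.array([0.75])) @ coeffs
print(u_pred[0], np.exp(-k * 0.75))
```

With only three data points, an unconstrained degree-5 fit would be hopelessly underdetermined; the 50 physics rows regularize it onto the one-parameter family of solutions of the governing equation, which is the essence of the approach.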
Over the past decade, modeling multiscale phenomena has been a major point of attention, which has advanced detailed deterministic models and their coupling across scales.13 Recently, machine learning has permeated into the multiscale modeling of hierarchical engineering materials3,44,47,48 and into the solution of high-dimensional partial differential equations with deep learning methods.34,43,49,50,51,52,53 Uncertainty quantification in material properties is also gaining relevance,54 with examples of Bayesian model selection to calibrate strain energy functions55,56 and uncertainty propagation with Gaussian processes of nonlinear mechanical systems.57,58,59 These trends for non-biological systems point towards immediate opportunities for integrating machine learning and multiscale modeling in the biological, biomedical, and behavioral sciences and open new perspectives that are unique to the living nature of biological systems.
The basic question of theory-driven machine learning is: given a physics-based ordinary or partial differential equation, how can we leverage structured physical laws and mechanistic models as informative priors in a machine learning pipeline to advance modeling capabilities and expedite multiscale simulations? Figure 5 illustrates the integration of theory-driven machine learning and multiscale modeling to accelerate model- and data-driven discovery. Historically, we have solved this problem using dynamic programming and variational methods. Both are extremely powerful when we know the physics of the problem and can constrain the parameter space to reproduce experimental observations. However, when the underlying physics are unknown, or there is uncertainty about their form, we can adopt machine learning techniques that learn the underlying system dynamics. Theory-driven machine learning allows us to seamlessly integrate physics-based models at multiple temporal and spatial scales. For example, multifidelity techniques can combine coarse measurements and reduced order models to significantly accelerate the prediction of expensive experiments and large-scale computations.29,69 In drug development, for example, we can leverage theory-driven machine learning techniques to integrate information across ten orders of magnitude in space and time towards developing interpretable classifiers to characterize the pro-arrhythmic potential of drugs.70 Specifically, we can employ Gaussian process regression to effectively explore the interplay between drug concentration and drug toxicity using coarse, low-cost models, anchored by a few, judiciously selected, high-resolution simulations.27 Theory-driven machine learning techniques can also leverage probabilistic formulations to inform the judicious acquisition of new data and actively expedite tasks such as exploring massive design spaces or identifying system dynamics.
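A minimal sketch of the Gaussian process surrogate idea, assuming a hypothetical logistic dose-toxicity curve in place of an expensive multiscale simulation; the kernel, length scale, and training concentrations are illustrative choices, not those of the cited work:

```python
import numpy as np

# Hypothetical dose-response surrogate: a few expensive "simulations" of drug
# toxicity at selected concentrations, interpolated with GP regression.
def toxicity(c):                                 # stand-in for a costly model
    return 1.0 / (1.0 + np.exp(-4.0 * (c - 1.0)))

def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

c_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # judiciously chosen runs
y_train = toxicity(c_train)

noise = 1e-6                                     # jitter for numerical stability
K = rbf(c_train, c_train) + noise * np.eye(c_train.size)
c_test = np.linspace(0.0, 2.0, 21)
K_s = rbf(c_test, c_train)

mean = K_s @ np.linalg.solve(K, y_train)         # GP posterior mean
cov = rbf(c_test, c_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # posterior uncertainty

print(np.max(np.abs(mean - toxicity(c_test))), std.max())
```

Five high-resolution runs anchor a smooth surrogate over the whole concentration range, and the posterior standard deviation flags where additional simulations would be most valuable.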
For example, we could devise an effective data acquisition policy for choosing the most informative mesoscopic simulations that need to be performed to recover detailed constitutive laws as appropriate closures for macroscopic models of complex fluids.71 More recently, efforts have been made to bake theory directly into machine learning practice. This enables the construction of predictive models that adhere to the underlying physical principles, including conservation, symmetry, or invariance, while remaining robust even when the observed data are very limited. For example, a recent model utilized only the conservation laws of the reactions to model the metabolism of a cell; while the exact functional forms of the rate laws were unknown, the equations were solved using machine learning.72 An intriguing implication is the ability of such models to leverage auxiliary observations to infer quantities of interest that are difficult to measure in practice.22 Another example is the use of neural networks constrained by physics to infer arterial blood pressure directly and non-invasively from four-dimensional magnetic resonance images of blood velocities and arterial wall displacements, by leveraging the known dynamic correlations induced by first principles in fluid and solid mechanics.11 In personalized medicine, we can use theory-driven machine learning to classify patients into specific treatment regimens. While this is typically done by genome profiling alone, models that supplement the training data with simulations based on biological or physical principles can have greater classification power than models built on observed data alone.
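The data acquisition policy mentioned above can be sketched, under strong simplifying assumptions, as a loop that always runs the candidate simulation where a Gaussian process surrogate is currently most uncertain; `expensive_simulation` is a hypothetical stand-in for a mesoscopic model:

```python
import numpy as np

# Max-variance acquisition: run next the simulation the surrogate knows least about.
def rbf(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def expensive_simulation(x):            # hypothetical mesoscopic model
    return np.sin(3.0 * x)

candidates = np.linspace(0.0, 1.0, 101)
x_run = [0.0, 1.0]                      # start from two boundary simulations
for _ in range(6):
    X = np.array(x_run)
    K = rbf(X, X) + 1e-8 * np.eye(X.size)
    K_s = rbf(candidates, X)
    # GP posterior variance at every candidate (prior variance is 1).
    var = 1.0 - np.einsum('ij,ij->i', K_s, np.linalg.solve(K, K_s.T).T)
    x_next = candidates[np.argmax(var)]  # most informative next run
    x_run.append(float(x_next))

outputs = expensive_simulation(np.array(x_run))  # would update the surrogate mean
print(sorted(x_run))
```

Because the GP variance at already-run points is essentially zero, the policy automatically spreads the budget of expensive simulations across the design space instead of re-sampling known regions.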
For the examples of radiation impact on cells and Boolean cancer modeling, a recent study has shown that, for small training datasets, simulation-based kernel methods that use approximate simulations to build a kernel improve downstream machine learning performance and are superior to standard no-prior-knowledge machine learning techniques.73
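A hedged sketch of such a simulation-based kernel: each input is pushed through a cheap approximate simulation (here a hypothetical decay-curve model), samples are compared in simulation space with an RBF kernel, and a small kernel ridge fit acts as the classifier; none of this reproduces the cited study's actual models:

```python
import numpy as np

# Map each input through an approximate mechanistic simulation, then build the
# kernel from distances between simulated feature trajectories.
def approx_simulation(x):
    t = np.linspace(0.0, 1.0, 10)
    return np.exp(-x[:, None] * t[None, :])      # one decay curve per sample

def sim_kernel(a, b, gamma=1.0):
    fa, fb = approx_simulation(a), approx_simulation(b)
    d2 = ((fa[:, None, :] - fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                   # RBF on simulated features

# Tiny training set: class +1 for slow decay (x < 1), -1 for fast decay.
x_train = np.array([0.2, 0.4, 0.6, 1.4, 1.6, 1.8])
y_train = np.array([1, 1, 1, -1, -1, -1], dtype=float)

K = sim_kernel(x_train, x_train) + 1e-6 * np.eye(x_train.size)
alpha = np.linalg.solve(K, y_train)              # kernel ridge "classifier"

x_test = np.array([0.3, 1.7])
pred = np.sign(sim_kernel(x_test, x_train) @ alpha)
print(pred)
```

The prior knowledge enters entirely through `approx_simulation`: two inputs are deemed similar when the mechanistic model says they behave similarly, which is what lets the classifier succeed with only six labeled samples.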