Abstract
In the early-stage development of sheet metal parts, key design properties of new structures must be specified. As these decisions are made under significant uncertainty regarding drawing configuration changes, they sometimes result in the development of new parts that, at a later design stage, turn out not to be drawable. As a result, there is a need to increase the certainty of experience-driven drawing configuration decisions. Complementing this process with a global sensitivity analysis (GSA) can provide insight into the impact of various changes in drawing configurations on drawability, unveiling cost-effective strategies to ensure the drawability of new parts. However, when quantitative global sensitivity approaches, such as Sobol's method, are utilized, the computational requirements for obtaining Sobol indices can become prohibitive even for small application problems. To circumvent these computational limitations, we evaluate the applicability of different surrogate models for computing global design variable sensitivities for the drawability assessment of a deep-drawn component. Here, we show in an exemplary application problem that both a standard Gaussian process regression (GPR) model and an ensemble model can provide commendable results at a fraction of the computational cost. We compare our surrogate models to existing approaches in the field. Furthermore, by comparing drawability measures, we show that the error introduced by the surrogate models is of the same order of magnitude as that arising from the choice of drawability measure. Consequently, our surrogate models can improve the cost-effective development of a component in the early design phase.
1 Introduction
Deep drawing is a widely used technique for the forming of sheet metal components. Its application extends to the production of various components, such as structures or body parts, particularly within the automotive industry. The deep drawing process introduces various parameters that influence the manufacturability of the component, which may be interdependent with functional requirements on the component itself. As manufacturability must be ensured throughout the design process of a component, it requires frequent evaluation during the early design stages.
To avoid the high cost of repetitive physical experiments, numerical methods, such as finite element methods (FEM), have been developed since the 1960s. An early overview of the available methods is given by Makinouchi [1]. The more recent development of simulation methods for sheet metal forming is reviewed in Ref. [2].
As the increase in available computational capacity is outpaced by the increase in the level of detail and complexity in sheet metal forming simulation models, other numerical methods such as sensitivity analysis (SA) (e.g., Refs. [3] or [4]) or metamodeling are becoming more and more important. The former can be defined as determining the relative impact of different input variables on the total output variability. As such, SA can be an essential tool for reducing the number of design parameters but also for revealing cost-effective measures, including design changes to improve the functionality and drawability of parts [5].
In general, SA methods are divided into local and global methods [3]. Local methods are generally based on the approximation of partial derivatives around a fixed design point. The locality of the results and the inability to estimate input parameter interactions are the main drawbacks of local SA measures. Global methods, on the other hand, can remedy this, but usually come at significantly higher computational costs. Yet, they suit the early design stage, where the assessed components still evolve dynamically and local sensitivities therefore have limited validity. GSA includes a variety of methods, from more qualitative screening methods, such as the one proposed in Ref. [6], to quantitative methods based on the variance decomposition of the function in question (e.g., Ref. [7]).
SA methods have been widely applied in sheet metal forming applications. The influence of process parameters on the drawability of a stainless steel cup is investigated by Padmanabhan et al. [8]. The influence of different shape parameters of the initial blank sheet was shown by Sattari et al. [9]. In these works, mainly qualitative methods are used due to computational requirements for variance decomposition methods. Originating from other fields of application [10,11], the idea of introducing a surrogate model before calculating global sensitivity measures was also successfully investigated and applied to sheet metal forming. While focusing on SA methods and their advances, Fruth [12] showed that a GPR surrogate model can be used to significantly reduce the computation times for such SA procedures. Lehrer et al. [13] compared sensitivities for one-step and multistep simulation schemes in deep drawing.
The contribution of the present work is twofold. First, we evaluate the applicability of different surrogate models for calculating global design variable sensitivities in assessing the drawability of a deep-drawn component. By this, we circumvent the limitation of otherwise infeasible computational budget requirements. Surrogate models used include a popular GPR approach as well as a customized machine learning approach proposed by Lehrer et al. [14] that has not been previously used in sensitivity analysis. Additionally, we include two common GSA libraries from the literature [15,16]. To validate result quality, all surrogate-based results are compared to traditional simulation-based results. Second, we investigate a recently proposed drawability metric [14] in the context of global sensitivity analysis and, as has rarely been done in previous works, illustrate the comparability with well-established metrics.
The structure of this paper is as follows: The design of experiments (DoE) [17] and the SA methods utilized in the present work are summarized in Secs. 2 and 3, respectively. Subsequently, the application problem is presented in Sec. 4 with a focus on the simulation model, design parameters, and drawability metrics compared here. The metamodels used for comparison to a standard global SA based on finite element simulations are introduced in Sec. 5. The results are presented and discussed in Sec. 6, followed by our conclusions and an outlook on potential future work (Sec. 7).
2 Design of Experiments
One of the essential initial steps in sensitivity analysis and the fitting of surrogate models is conducting a DoE [18]. Assuming little prior knowledge about the investigated function, that is, a black-box problem, the aim in both procedures is to achieve good coverage of the design space. Because there is no unique mathematical definition for this aim and thus no unique optimal way to distribute samples, DoE is still an active field of research. In the present study, we use Latin hypercube sampling and a so-called Sobol sampling scheme from the class of quasi-Monte Carlo (QMC) methods.
Monte Carlo sampling is very popular in a number of applications due to the simplicity of the approach. Samples are drawn pseudo-randomly from the design space. However, the samples are generally not uniformly distributed over the design space. As a remedy, so-called QMC methods were developed, which are usually based on low-discrepancy sequences constructed from prime-number bases. Popular examples include Hammersley, Halton, Faure, and Sobol sequences (see, for example, Ref. [19] for more details). In the present work, the latter is used for sensitivity analysis. Furthermore, the modifications suggested by Saltelli [20] are applied to reduce the number of required samples in SA. Readers are referred to the original publication for more details of the modifications.
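To illustrate the improved uniformity of QMC samples, the following minimal sketch compares pseudo-random and Sobol samples via the discrepancy measure available in SciPy. It is an illustration only; the SA samples in this work are generated with SALib (see below), and the dimension d = 5 merely mirrors the later application case.

```python
# Illustration only: compare space coverage of pseudo-random (MC) and
# Sobol (QMC) samples via the centered discrepancy (lower is more uniform).
import numpy as np
from scipy.stats import qmc

d = 5                                    # number of design variables (illustrative)
rng = np.random.default_rng(seed=0)

x_mc = rng.random((128, d))              # plain Monte Carlo samples in [0, 1)^d
x_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=7)   # 2**7 = 128 Sobol points

print("MC  discrepancy:", qmc.discrepancy(x_mc))
print("QMC discrepancy:", qmc.discrepancy(x_qmc))                 # typically much lower
```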
For fitting our surrogate models, we employ optimal Latin hypercube sampling (OLHS). A Latin hypercube design (LHD) [21] is commonly constructed as follows: A total of N samples in d dimensions is selected. Each dimension of the design space is divided into N bins of equal probability, dividing the total design space into N^d cells. N cells are then randomly selected such that each bin in each dimension contains exactly one selected cell. Within each selected cell, one sample is placed either in the center or at a random location [22]. The former case is applied here.
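A minimal sketch of this centered construction is given below; the sample and dimension counts are illustrative, and the in-house implementation used in this work may differ in detail.

```python
# Minimal sketch of a centered Latin hypercube design: every dimension is an
# independent random permutation of the N bins, and samples sit at bin centers.
import numpy as np

def centered_lhd(n_samples: int, n_dims: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    bins = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (bins + 0.5) / n_samples      # bin centers mapped to the unit hypercube

X = centered_lhd(n_samples=50, n_dims=5)  # e.g., 50 samples in 5 dimensions
```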
One of the most significant improvements to LHS that reduces common problems such as correlations is OLHS. The idea is to incrementally improve an LHD with respect to a space-filling criterion. In the present work, a simulated annealing algorithm is utilized, as suggested by Morris et al. [23]. It consists of random pairwise and coordinatewise swaps between samples. Newly generated samples are then always accepted if they improve the space-filling criterion, and they are accepted with a certain probability if they do not offer an improvement. Other optimization approaches for OLH have been suggested, for example, by Ye et al. [24] or Jin et al. [25].
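The sketch below illustrates the simulated-annealing improvement described above, using the maximin distance as a stand-in for the space-filling criterion; the criterion, the cooling schedule, and the iteration count are assumptions made here for illustration and not the exact in-house settings.

```python
# Sketch only: improve an LHD by random pairwise, coordinate-wise swaps with a
# simulated-annealing acceptance rule. The maximin distance criterion and the
# linear cooling schedule are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist

def maximin(X: np.ndarray) -> float:
    return pdist(X).min()                # larger minimum distance = more space-filling

def optimize_lhd(X: np.ndarray, n_iter: int = 2000, t0: float = 0.1, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    cur, crit = X.copy(), maximin(X)
    for i in range(n_iter):
        cand = cur.copy()
        a, b = rng.choice(len(cand), size=2, replace=False)   # pick two samples
        k = rng.integers(cand.shape[1])                       # pick one coordinate
        cand[[a, b], k] = cand[[b, a], k]                     # swap preserves the LHD property
        new = maximin(cand)
        temp = max(t0 * (1.0 - i / n_iter), 1e-12)
        # always accept improvements; accept deteriorations with temperature-dependent probability
        if new > crit or rng.random() < np.exp((new - crit) / temp):
            cur, crit = cand, new
    return cur

# usage: start from any LHD in the unit hypercube, e.g., a centered design as above
X0 = (np.column_stack([np.random.default_rng(1).permutation(20) for _ in range(3)]) + 0.5) / 20
X_opt = optimize_lhd(X0)
```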
In the present work, an in-house Python implementation of OLHS is used to generate the DoE for the surrogate models. This reflects our assumption that, in the early design phase, we start from a comparably small number of already conducted simulations. The DoE for sensitivity analysis is based on the implementation of the Sobol sequence within the Python library SALib [26].
3 Sensitivity Analysis and Uncertainty Quantification
The sensitivity analysis methods used in this work are summarized in the following (see Ref. [3] for further details).
Following the sampling scheme of Saltelli [20], a total of

n = N (d + 2)     (3)

target function evaluations is required. Here, N can be specified as the desired number of samples, and d denotes the number of design variables.
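As a self-contained illustration of this sample count and of the total-order indices used later, the snippet below runs Saltelli sampling and Sobol analysis with SALib on its built-in Ishigami test function, which stands in for the deep drawing simulation.

```python
# Illustration of N(d+2) evaluations and total-order Sobol indices with SALib.
# The Ishigami test function replaces the (unavailable) FEM model here.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

N = 1024
X = saltelli.sample(problem, N, calc_second_order=False)
print(X.shape)                                   # (N * (d + 2), d) = (5120, 3)

Y = Ishigami.evaluate(X)
Si = sobol.analyze(problem, Y, calc_second_order=False)
print(Si["ST"], Si["ST_conf"])                   # total-order indices and their CIs
```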
4 Application Case
Deep drawing is a forming process that converts flat sheet metal into three-dimensional shapes. This is achieved by drawing a blank into a die with the help of a punch. The required forming energies, which are high especially for large steel geometries, are applied by a press. Related mechanical influences on the process include plastic hardening of the blank, lubrication between the tools and the blank, the shape of the deep-drawn part, and the local restraint of material flow into the die. Deep drawing is a fast and cost-efficient process when large quantities are needed.
Explicit FEM models are state-of-the-art for simulating deep drawing [27]. A rigid punch mesh is moved over time to enforce deformation of the blank via contacts. Analytical drawbead and blankholder forces affect material draw-in into the die. The material properties are represented here by a transversely anisotropic elasto-plastic material model with the Hill48 yield criterion [28], using Hockett-Sherby hardening [29]. The simulation scheme is computationally costly, as the maximum time-step is restricted by the Courant-Friedrichs-Lewy condition [30]. Therefore, mass scaling is applied to increase the allowable maximum time-step size at the cost of nonphysical added mass. Moreover, linear shape functions are employed for the deformable Belytschko-Tsay shell elements of the blank, and reduced integration is used to further reduce computational costs. Additionally, adaptive remeshing allows for a fine mesh in areas of high deformation, whereas other regions are meshed more coarsely. These efforts result in simulation times of roughly 10 min (see footnote 1) for our cross-die example. This underlines the need for a further reduction in the evaluation time to compute SIs, as doing so involves at least hundreds to thousands of simulation runs. An exemplary setup of an FEM deep drawing model using LS-DYNA is given in Ref. [31]. Our cross-die simulation model is shown in Fig. 1. Supplementary simulation parameters are given in Table 2 in Appendix A.
| Symbol | Parameter range | Parameter |
|---|---|---|
| t | (0.8–2.0) mm | Sheet thickness |
| r | (0.8–2.5) | Lankford coefficient |
| σy | (140.0–180.0) MPa | Yield strength of the blank |
|  | (130.0–190.0) kN | Blankholder force |
|  | (0.05–0.12) | Dynamic coefficient of friction |
This simulation scheme focuses on the development stage in which a first iteration of the tooling geometry has been designed. Readers interested in SA at an earlier design stage can opt for a one-step simulation scheme at the cost of less accurate results [13].
We select five parameters, including geometry, material, and process parameters, for which we calculate SIs. These are listed in Table 1. All parameters are assumed to follow uniform distributions.
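For reference, the parameter space of Table 1 can be written as a SALib problem definition as sketched below; the variable names for the blankholder force and the friction coefficient ("F_bh", "mu") are placeholders introduced here for illustration.

```python
# Table 1 as a SALib problem definition (uniform distributions assumed, see text).
# "F_bh" (blankholder force) and "mu" (friction coefficient) are placeholder names.
problem = {
    "num_vars": 5,
    "names": ["t", "r", "sigma_y", "F_bh", "mu"],
    "bounds": [
        [0.8, 2.0],       # sheet thickness t in mm
        [0.8, 2.5],       # Lankford coefficient r
        [140.0, 180.0],   # yield strength of the blank in MPa
        [130.0, 190.0],   # blankholder force in kN
        [0.05, 0.12],     # dynamic coefficient of friction
    ],
}
```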
| Symbol | Value | Parameter |
|---|---|---|
|  | 290 mm | Cross-die length |
|  | 120 mm | Cross-die width |
|  | 29 mm | Cross-die depth |
|  | 200 N | Analytical drawbead forces |
| E | 205 GPa | Young's modulus |
| ν | 0.3 | Poisson ratio |
| ρ |  | Mass density |
|  | (588.0–628.0) MPa | Saturation stress |
| N | (0.82–0.49) | Material constant |
| p | (0.70–0.64) | Material constant |
The hardening parameters vary due to adaptations made to ensure hardening-only curves.
Subsequently, the calculated blank deformations are evaluated. In general, the postprocessing focuses on inspecting the structural integrity of the drawn geometry (e.g., severe thinning and cracks), determining a press capable of applying the necessary restraining forces, and evaluating wrinkling to assess the feasibility of the process.
There are several methods to assess the drawability of complex shapes. Failure models, such as the generalized incremental stress-state-dependent damage model (GISSMO) [32], accumulate damage by an incremental formulation until failure. Another common practice is to investigate the deformation of the blank with a forming limit diagram (FLD). Here, regions of cracks and wrinkles can be confined according to the major and minor principal Hencky strains.
where N represents the number of elements in the simulation model, h0 is the initial thickness of the blank sheet, and h_i are the element thicknesses at the end of the drawing simulation.
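As an illustration of such a thinning-type measure, the sketch below aggregates the per-element relative thinning; the specific aggregation (maximum over all elements) is an assumption made here for illustration and not necessarily the exact definition used in this work.

```python
# Illustrative sketch only: a thinning-type drawability measure built from the
# quantities named above (h0, element thicknesses h_i). The maximum relative
# thinning is an assumed aggregation, not necessarily the paper's definition.
import numpy as np

def thinning_measure(h_final: np.ndarray, h0: float) -> float:
    rel_thinning = (h0 - h_final) / h0        # per-element relative thinning
    return float(rel_thinning.max())          # worst-case element governs drawability

# e.g., element thicknesses exported from the FEM postprocessing
value = thinning_measure(np.array([0.95, 0.88, 0.91]), h0=1.0)
```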
Here, the area factor represents the ratio between the surface area of an element and that of the drawn blank, the distance term is the minimum Euclidean distance of an element to the corresponding limit curve, n denotes the total number of elements, and w denotes the weights. Depending on the subscript, the elements are associated with crack (c), wrinkle (w), or good (g, drawable) regions. To account for the influence of wrinkling, a dedicated weight is assigned to wrinkle elements; for scaling purposes, the weight of good points is fixed. The values of the factors are chosen as suggested in the original publication [14] due to the similarity of the application problems.
5 Deep Drawing Metamodels
In this section, we introduce the surrogate modeling approaches used in this study. First, we summarize GPR by introducing the most relevant concepts. Second, we introduce an ensemble method based on a machine learning architecture recently proposed by Lehrer et al. [14].
5.1 Gaussian Process Regression.
where the first term is an unknown constant and the second is a stationary random process. Additionally, a sample dataset is required. It consists of m samples with input data and the corresponding outputs of the respective function.
where r is the correlation vector between the sample data and the new point; the remaining terms are the correlation matrix between the sample data points and a column vector filled with ones.
One potential advantage of estimating SIs using a GPR surrogate model is that it can provide prediction uncertainties alongside the prediction itself. While this is not leveraged in the present work, also to ease the comparison with the other methods, we believe it would be interesting to investigate how to include these uncertainties in the SI estimation in future work.
Here, Gaussian process regression and kernel implementations of the scikit-learn library in Python are used [37].
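A minimal sketch of such a scikit-learn GPR surrogate is given below; the kernel choice, its hyperparameters, and the placeholder training data are illustrative and not the exact configuration of this study.

```python
# Minimal sketch of a GPR surrogate with scikit-learn; kernel and data are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# placeholder training data: X_train would be the OLHS DoE, y_train the drawability measure
X_train = np.random.default_rng(0).random((50, 5))
y_train = X_train @ np.array([1.0, 0.5, 0.1, 0.05, 0.02])

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(5))      # anisotropic RBF kernel
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
gpr.fit(X_train, y_train)

# prediction plus standard deviation, i.e., the uncertainty mentioned above
y_pred, y_std = gpr.predict(X_train[:3], return_std=True)
```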
5.2 Ensemble.
The same dataset as in Sec. 5.1 is used. Here, an instance of the set consists of the features of the input space and the target value of the output space (supervised learning). To establish a surrogate model for the multivariate single-objective deep drawing problem, we use the ensemble method of stacked generalization (stacking), first proposed by Wolpert [43]. Here, first-level learners L1 are trained on the features of the dataset. The second-level learner L2 is then trained on the predictions of the first-level learners.
Stacking offers two distinct advantages. First, it enhances the accuracy and robustness of the final predictions by leveraging an ensemble of base estimators instead of relying on an individual model. Second, stacking exhibits greater flexibility in combining first-level predictions compared to methods such as averaging or voting, thereby improving accuracy.
To improve the prediction performance of the stacking surrogate, the first-level learners should be diverse. This is achieved through two measures. First, the first-level learners are trained with a set of n different base regression algorithms A (see Fig. 2). Second, these algorithms are used to establish a set of bootstrap aggregation (bagging) estimators [44] trained on k-fold subsets of the dataset. This follows the combination approach proposed by Wolpert et al. [45]. A comprehensive overview of ensemble methods can be found in Ref. [46]. Additionally, Mendes-Moreira et al. [47] cover regression ensembles.
The resulting architecture incorporates a number of base algorithms nA and a number of bagging models per base algorithm nB. As the weights of the second-level learner are adjusted through least-squares minimization, the addition of more first-level learners introduces additional terms with weights to be optimized. As long as a new first-level learner does not contribute new information, its weight is minimized. Still, by adding a further term, even if its weight is close to zero, an additional residual error is propagated to the final prediction y, leading to error accumulation. In either scenario, introducing more first-level learners increases the complexity of optimizing the second-level learner, rendering it susceptible to suboptimal results. Due to these considerations, a balance between the number of first-level learners (quantity) and their information contribution (quality) should be sought.
We use the implementations of scikit-learn [37] for the configuration of the ensemble model. Its architecture is shown in Fig. 2.
Ten base algorithms are employed, namely, lasso, elastic net, Gaussian process, decision tree, k nearest neighbors, stochastic gradient descent, multilayer perceptron, adaptive boosting, and support vector regression. The number of first-level learners per base algorithm is set during hyperparameter optimization. The second-level learner here is also chosen by hyperparameter optimization from a set of estimators. These are Ridge, Bayesian Ridge, and linear regression with non-negativity constraints. The ensemble's hyperparameters are optimized with a grid search optimization, during which the contributions of the first-level learners are adjusted as well. Five-fold cross-validation is conducted to mitigate the effects of overfitting.
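A simplified sketch of such a stacking configuration with scikit-learn is shown below. The subset of base algorithms, the bagging sizes, and the fixed non-negative linear second-level learner are illustrative assumptions; in the study, these choices are made via grid search with five-fold cross-validation.

```python
# Simplified sketch of the stacking architecture: bagged first-level learners
# from several base algorithms, combined by a non-negative linear regressor.
# Base algorithms and hyperparameters are illustrative, not the tuned setup.
from sklearn.ensemble import StackingRegressor, BaggingRegressor, AdaBoostRegressor
from sklearn.linear_model import Lasso, ElasticNet, LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

def bagged(est, n_estimators=5):
    # wrap each base algorithm in a bagging ensemble trained on bootstrap subsets
    return BaggingRegressor(est, n_estimators=n_estimators, random_state=0)

first_level = [
    ("lasso", bagged(Lasso(alpha=1e-3))),
    ("enet", bagged(ElasticNet(alpha=1e-3))),
    ("knn", bagged(KNeighborsRegressor())),
    ("tree", bagged(DecisionTreeRegressor())),
    ("ada", bagged(AdaBoostRegressor())),
    ("svr", bagged(SVR())),
]

# second-level learner: linear regression with a non-negativity constraint
ensemble = StackingRegressor(
    estimators=first_level,
    final_estimator=LinearRegression(positive=True),
    cv=5,
)
# ensemble.fit(X_train, y_train); ensemble.predict(X_new)
```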
Alternative architectures with ensemble pruning (neglecting noncontributing models) did not show accuracy improvements. It was also found that switching to per-bagging-averaged outputs Y1 as inputs for the second-level learner does not result in significant changes with respect to the coefficient of determination. As part of the hyperparameter optimization, a linear regressor with a non-negativity constraint was predominantly the best-performing second-level learner. This agrees with the findings in Ref. [48].
5.3 Suggested Workflow.
The idea in the present work is to fit a surrogate model based on a separate, smaller DoE and then evaluate this surrogate model for the more expensive SA. This surrogate-assisted SA clearly offers significant advantages in terms of computational costs. An attempt to quantify this advantage and its influence on result quality is presented in Sec. 6 for the application problem introduced above. A schematic comparing the classic SA workflow with the surrogate-assisted SA utilized here is depicted in Fig. 3.
In the classic SA workflow, a variant of Sobol sampling is used in combination with the simulation model to determine the sensitivity indices from the respective objective function values. In the surrogate-assisted workflow, a separate DoE (here based on OLHS) is introduced that is much smaller than the DoE necessary for the SA. From this smaller DoE, a surrogate model (e.g., kriging or the ensemble model) is fitted. The surrogate model is subsequently used to evaluate the Sobol samples necessary for the SA.
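The sketch below strings these steps together with placeholder components: a dummy run_fem function stands in for the expensive FEM-based drawability evaluation, plain LHS replaces the in-house OLHS, and a default GPR is used as the surrogate. None of these stand-ins reproduce the exact setup of this study.

```python
# Sketch of the surrogate-assisted SA workflow (Fig. 3) with placeholder parts.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def run_fem(x):
    # dummy stand-in for the expensive FEM drawability evaluation (assumption)
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * np.sin(x[4])

problem = {"num_vars": 5, "names": ["t", "r", "sigma_y", "F_bh", "mu"],
           "bounds": [[0.8, 2.0], [0.8, 2.5], [140.0, 180.0], [130.0, 190.0], [0.05, 0.12]]}

# 1) small DoE for surrogate fitting (plain LHS here in place of the in-house OLHS)
lower, upper = zip(*problem["bounds"])
X_doe = qmc.scale(qmc.LatinHypercube(d=5, seed=0).random(50), lower, upper)
y_doe = np.array([run_fem(x) for x in X_doe])          # 50 "expensive" runs

# 2) fit a surrogate (GPR here; the ensemble from Sec. 5.2 is a drop-in replacement)
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_doe, y_doe)

# 3) evaluate the large Saltelli sample on the cheap surrogate only
X_sa = saltelli.sample(problem, 1024, calc_second_order=False)   # 7168 points
Si = sobol.analyze(problem, surrogate.predict(X_sa), calc_second_order=False)
print(Si["ST"])                                        # total-order sensitivity indices
```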
6 Results and Discussion
In the following, we present, compare, and discuss the results of the two different SA approaches. Our two introduced surrogate models are compared with the established GSA packages GSAreport [16,49] and UQpyLab [15,50]. The former uses random forest regression as an internal surrogate model, and the latter employs polynomial chaos expansion (PCE) to determine sensitivity indices (see Table 5 in Appendix C for more details). For reference, we employ a GSA based solely on mechanical simulations. All surrogate models are fitted using the five parameters given in Table 1. SA is performed using N = 1024, unless otherwise stated. Thus, we end up with 7168 FEM simulations for our mechanical reference (compare Eq. (3)). We choose to train each surrogate model to predict a single drawability measure. This ensures flexibility of their use in other scenarios, such as optimization loops. Furthermore, this design choice allows the models to be retrained if their accuracy is deemed low. We assume that engineers are mainly interested in the total influence of a parameter. Therefore, we evaluate sensitivities using the total-order SI, as it includes all (direct and higher-order) effects of a parameter.
| Surrogate | Parameter | ns = 500 | ns = 50 |
|---|---|---|---|
| GPR | t | 0.08 (1) | 0.27 (2) |
|  | r | 0.05 (3) | 0.12 (2) |
|  | σy | 0.91 (3) | 1.00 (4) |
|  |  | 1.00 (2) | 3.61 (3) |
|  |  | 0.16 (1) | 1.0 (3) |
| Ensemble | t | 0.33 (3) | 0.36 (4) |
|  | r | 0.19 (4) | 0.24 (3) |
|  | σy | 0.71 (2) | 0.93 (2) |
|  |  | 0.40 (1) | 0.82 (1) |
|  |  | 0.79 (3) | 0.43 (1) |
| UQpyLab | t | 0.19 (2) | 0.05 (1) |
|  | r | 0.03 (1) | 0.40 (4) |
|  | σy | 0.14 (1) | 0.04 (1) |
|  |  | 1.02 (4) | 5.61 (4) |
|  |  | 0.32 (2) | 2.81 (4) |
| GSAreport | t | 0.33 (4) | 0.35 (3) |
|  | r | 0.04 (2) | 0.06 (1) |
|  | σy | 0.93 (4) | 0.99 (3) |
|  |  | 1.00 (3) | 1.0 (2) |
|  |  | 1.00 (4) | 0.98 (2) |
The cells display the relative error of the SI means compared to the mechanical reference, as well as the rank of each surrogate for each parameter (in parentheses).
To investigate whether SIs can be accurately calculated based on our surrogate predictions, they are compared with the SIs of GSAreport, UQpyLab, and the mechanical reference simulation in Fig. 4.
There is a relatively large confidence interval (CI) associated with the mechanical reference and most models for all SIs (confidence level of 95%). This is rooted in the choice of the objective function and is discussed further in the context of Fig. 5. For completeness, the same comparison for the thinning objective is shown in Fig. 8 in Appendix B. UQpyLab does not show CIs, as its SI calculation is deterministic. Still, the SIs remain comparable, as the correct value lies near the mean with high probability. The SIs of all surrogate models are similar to those of the mechanical simulation. High sensitivities are calculated for the sheet thickness t and the Lankford coefficient r, and low sensitivities for the yield stress, the blankholder force, and the dynamic coefficient of friction. The SIs of the GPR surrogate deviate less for the two most sensitive features, while the ensemble surrogate performs better for the less sensitive design variables, such as the blankholder force. We conclude that our models are applicable to the SI computation. Therefore, computationally expensive runs can be drastically reduced, making SI computation applicable in the early design phase.
UQpyLab returns SIs relatively close to the reference throughout. GSAreport also detects the SI magnitudes, but it deviates more from the reference, especially for less sensitive parameters. This pattern is also evident in Fig. 8. We attribute the suboptimal performance of GSAreport to the fact that it was originally designed to operate on internally generated samples based on specific sampling techniques suitable for various sensitivity analysis algorithms (e.g., a Sobol scheme). Despite offering the option to fit a random forest model to existing data, the resulting interpolated model exhibited low accuracy, with an R2 value of 0.52 for the 500-sample case discussed in Fig. 4. This limitation significantly impacts the predictions, leading to limited accuracy in the GSA. Consequently, GSAreport does not appear to be the most suitable choice for conducting GSA on pre-existing data.
Having illustrated that the surrogate models are capable of contributing to the SI calculation, the question arises of how few simulations are needed to set up well-performing surrogate models. The SIs of the most sensitive design variable here (sheet thickness t) are shown per surrogate in Fig. 6 as a function of the number of simulations used for fitting (training).
Our surrogate models and UQpyLab converge to the same SI, which is different from the mechanical reference here. As little change is observed beyond 100 training samples, the remaining difference is likely not due to limited surrogate model accuracy. This is further confirmed by nonimproving R2 values of approximately 0.93 for all GPR and ensemble models based on 50 or more training samples. Sobol indices for GSAreport do not converge for the investigated number of samples. Moreover, it deviates more from the mechanical reference than the other three methods.
All surrogate models are fitted to identical training data sets, which are created using OLHS and might therefore lead to very similar fits of the respective surrogate models. This best explains the small difference between the SIs of the different surrogate models. Our best explanation for the remaining difference between all surrogates and the mechanical reference, even for large training sets, is the different sampling method used for surrogate training and for the SA by SALib (see footnote 2).
Moreover, the surrogate models' negligible changes in SI over an increasing number of data points demonstrate their data efficiency. As a supplement, the convergence plot for the second most sensitive parameter, the Lankford coefficient, is given in Fig. 9 (Appendix B). We detect the same pattern for the remaining design variables. Our surrogate models perform very similarly. The ensemble model, showing good results from ns = 50 onward, converges at ns = 100, while the SIs of the GPR model hardly change from ns = 50 onward. The same behavior can be noted for the UQpyLab model. Again, the GSAreport model deviates more from the mechanical reference and the other models.
Overall, our surrogate models are sufficiently accurate with very few base simulations, allowing significant savings in computational cost (up to 99.3% fewer calculations; see footnote 3).
For deep drawing, SIs reveal the potential of changing certain design variables to obtain a better drawing configuration. To detect the most relevant influences, it is important to first determine the magnitude of the sensitivities. Second, when relying on design variables representing cost-effective changes in the drawing configuration (e.g., changes in geometry), the exact SI might differ enough to affect the order in which setup changes are made. However, substantial differences in global sensitivities are reliably detected. Therefore, the authors recommend grouping surrogate-based SIs of similar magnitude and making further configuration changes accordingly.
Sobol indices are calculated from a scalar target value, which is derived from the evaluation of the mechanical simulation during postprocessing. This evaluation notably affects the SI calculation, as illustrated in Fig. 5. For both objective functions, the parameters show similar SI magnitudes. However, the thinning evaluation shows a decrease in SI for each design variable. Here, the SIs of the yield strength, the blankholder force, and the friction coefficient are close to zero, so these parameters are insensitive with respect to the objective. This underlines the importance of choosing an objective that represents the problem at hand, as sensitivities and their associated drawing configuration modifications can vary considerably. Furthermore, the objective functions lead to different magnitudes of the CI per design variable. The distances objective shows larger CIs throughout. Therefore, the landscape of the thinning objective may be more suitable for the SI computation. However, it is essential to emphasize that the distances objective more accurately encapsulates the concept of drawability, as it also takes wrinkling into account. To overcome the limitation of infeasible computational effort to reduce the CIs, surrogate models are well suited to increasing N substantially (compare Eq. (3)), as their evaluation times are insignificant.
When operating with a limited computational budget or an objective function that inherently yields large CIs, these might be too large to rely on the calculated SIs. To address this limitation, our surrogate models' negligible evaluation times can be used to run the SI calculation with a considerably higher N. Given approximately the same computational budget, SIs and their corresponding CIs for the distances objective function are shown in Fig. 7 for the direct evaluation of the mechanical simulation compared to the surrogate-based SI computation. For our surrogates, we choose a correspondingly larger N.
When compared to the mechanical reference run (nmechref = 7168), the limited-budget mechanical simulation detects the aforementioned SI magnitudes for every parameter. Sheet thickness and Lankford coefficient are identified as sensitive, whereas the remaining parameters are not. The corresponding CIs are large, as expected (N = 8). The mechanical simulation does not exactly resemble the reference, but the overall pattern in SI magnitude is discernible. Here, the very low number of samples leads to a mean SI above 1.0. As the total-order SI should never exceed 1.0, this, in combination with the large CI, illustrates the need for a surrogate model in this case. On the other hand, the SIs based on the limited-budget surrogate predictions differ noticeably from each other. This reveals the influence of the different surrogate approaches, which learn different patterns and therefore yield different predictions and SIs. Generally, none of the limited-budget surrogates resembles the mechanical reference for all parameters. Concerning the magnitudes of the SIs, a user will still get a fair idea of which parameters are sensitive. Compared to the surrogates presented in Fig. 4, there is potential to improve the SI similarity by adding more samples. The surrogate results for the thinning drawability measure (see Fig. 10 in Appendix B) are significantly better, again stressing the importance of selecting a representative drawability measure.
Although the surrogate evaluation time becomes relevant for a large number of samples, it is still orders of magnitude less significant than other factors, such as the simulation time of a single configuration. Whether simulation-based SIs with very large CIs are less credible than a surrogate-based estimation with negligible CIs depends on the user's trust in the surrogate models, the real-world impact of the design variables, the desired confidence level, and the computational budget.
Tables 3 and 4 provide a comprehensive overview of the utilized surrogate models for the distances and thinning drawability measures, respectively. Displayed are the relative errors of the surrogate models compared to the mechanical reference. Both converged models and surrogates fitted on limited data are included in the comparison.
| Surrogate | Parameter | ns = 500 | ns = 50 |
|---|---|---|---|
| GPR | t | 0.29 (1) | 0.28 (1) |
|  | r | 0.29 (1) | 0.29 (1) |
|  | σy | 0.13 (1) | 0.27 (2) |
|  |  | 1.00 (2) | 1.00 (2) |
|  |  | 0.86 (1) | 0.90 (2) |
| Ensemble | t | 0.29 (1) | 0.32 (3) |
|  | r | 0.29 (1) | 0.34 (3) |
|  | σy | 1.09 (4) | 1.01 (4) |
|  |  | 6.64 (4) | 11.72 (4) |
|  |  | 2.7 (4) | 3.67 (4) |
| UQpyLab | t | 0.29 (1) | 0.28 (1) |
|  | r | 0.29 (1) | 0.29 (1) |
|  | σy | 0.14 (2) | 0.25 (1) |
|  |  | 0.95 (1) | 0.89 (1) |
|  |  | 0.86 (1) | 0.65 (1) |
| GSAreport | t | 0.45 (4) | 0.54 (4) |
|  | r | 0.43 (4) | 0.55 (4) |
|  | σy | 1.00 (3) | 0.95 (3) |
|  |  | 1.00 (2) | 1.46 (3) |
|  |  | 1.00 (3) | 0.91 (3) |
The cells display the relative error of the SI means compared to the mechanical reference, as well as the rank of each surrogate for each parameter (in parentheses).
GSAreport

| Parameter | Value |
|---|---|
| Version | 1.0.1 |
| Internal surrogate model | Random forest |

All other settings default.

UQpyLab

| Parameter | Value |
|---|---|
| Version | 0.9.5 |
| Internal surrogate model | Polynomial chaos expansion |
| Polynomial degree | Up to ten |

All other settings default.
7 Conclusions
In the present work, Sobol's method for quantitative global sensitivity analysis is complemented with surrogate modeling techniques to assess an exemplary cross-die deep drawing problem. The drawability of the component is used as the target metric for the sensitivity analysis. Two different drawability metrics are compared for context. Five input parameters are defined, including material, process, and geometry parameters. As surrogate models, a standard GPR model and an ensemble model are investigated. In addition, the established approaches of UQpyLab and GSAreport are included in the comparison. Surrogate-assisted sensitivity indices are compared to reference values obtained by performing a high number of finite element simulations.
For both target functions, the surrogate-assisted approaches obtain very good sensitivity indices that are qualitatively equivalent to the mechanical reference at less than 1% of the computational cost. The remaining small differences are explained by the limited quality of the surrogate models and a dependence on the DoE. In addition, the difference between the results for the two target functions is shown to be larger than the error of the surrogate-assisted approach. This highlights once again the importance of a careful choice of the drawability measure for sheet metal forming problems. Furthermore, all investigated models exhibit different deviations from the mechanical reference. Rather than choosing the mathematically most accurate model, one may select a surrogate that captures the overall pattern of SI magnitudes without significant outliers. This better supports the aforementioned grouping of sensitive parameters.
Our results confirm that a surrogate-assisted sensitivity analysis approach can be applied to significantly reduce computational costs at earlier design stages, where simulation models are already available but design changes are still necessary. However, the approach remains to be investigated on a more complex, realistic application example. This may also include the use of categorical input parameters, which are arguably more realistic, for example, for the sheet thickness.
We do believe that there is potential for further reduction of the computational requirements of the surrogate-assisted approach, for example, by employing a multifidelity approach or utilizing recently popular model order reduction techniques. Moreover, using the Saltelli-improved Sobol sequence for fitting the surrogates might also improve their accuracy. We deem it also interesting for future work to expand the present investigation to include different design variable distributions and to fully utilize the potentials of the surrogate models such as by including the GPR prediction uncertainty into SA estimations.
Funding Data
Federal Ministry for Economic Affairs and Climate Action (BMWK) (Award Nos.: KK5004606 and KK5339801BD1; Funder ID: 10.13039/501100006360).
Conflict of Interest
There is no conflict of interest.
Data Availability Statement
The detailed information on the proposed methods, simulation models, and the software used in this article can be found in the corresponding sections. Generated data and code can be obtained from the corresponding authors upon reasonable request.
Appendix A: Supplementary Information on the Simulation Model
Appendix B: Complementary Surrogate Results
Appendix C: Parameters Used in Other Surrogate-Based Sensitivity Analysis Libraries
Footnotes
1. Calculated with the shared memory parallel (SMP) R13 LS-DYNA solver with single precision on 10 cores using an Intel® Core™ i9-10900K CPU @ 3.70 GHz processor.
2. A preliminary test with an ensemble model fitted on 128 points sampled through Sobol instead of OLHS showed a smaller difference between the SI computed on the surrogate model and the one computed on the mechanical model. This difference is approximately 0.06, contrasting with the 0.15 observed in Fig. 6. We attribute this to the higher accuracy of the model, indicated by an R2 score of 0.98. Further investigation into this matter is reserved for our future work.
3. Considering converged surrogate performance for 50 base simulations compared to 7168 direct computations as deemed acceptable regarding CIs.