Abstract

In the early-stage development of sheet metal parts, key design properties of new structures must be specified. As these decisions are made under significant uncertainty regarding drawing configuration changes, they sometimes result in the development of new parts that, at a later design stage, will not be drawable. As a result, there is a need to increase the certainty of experience-driven drawing configuration decisions. Complementing this process with a global sensitivity analysis (GSA) can provide insight into the impact of various changes in drawing configurations on drawability, unveiling cost-effective strategies to ensure the drawability of new parts. However, when quantitative global sensitivity approaches, such as Sobol's method, are utilized, the computational requirements for obtaining Sobol indices can become prohibitive even for small application problems. To circumvent computational limitations, we evaluate the applicability of different surrogate models engaged in computing global design variable sensitivities for the drawability assessment of a deep-drawn component. Here, we show in an exemplary application problem that both a standard Gaussian process regression (GPR) model and an ensemble model can provide commendable results at a fraction of the computational cost. We compare our surrogate models to existing approaches in the field. Furthermore, by comparing drawability measures we show that the error introduced by the surrogate models is of the same order of magnitude as that from the choice of drawability measure. In consequence, our surrogate models can improve the cost-effective development of a component in the early design phase.

1 Introduction

Deep drawing is a widely used technique for the forming of sheet metal components. Its application extends to the production of various components, such as structures or body parts, particularly within the automotive industry. The deep drawing process introduces various parameters that influence the manufacturability of the component, which may be interdependent with functional requirements on the component itself. As manufacturability must be ensured throughout the design process of a component, it requires frequent evaluation during the early design stages.

To avoid the high cost of repetitive physical experiments, numerical methods, such as finite element methods (FEM), have been developed from the 1960s. An early overview of the available methods is given by Makinouchi [1]. The more recent development of simulation methods for sheet metal forming is reviewed in Ref. [2].

As the increase in available computational capacity is outpaced by the increase in the level of detail and complexity in sheet metal forming simulation models, other numerical methods such as sensitivity analysis (SA) (e.g., Refs. [3] or [4]) or metamodeling are becoming increasingly important. The former can be defined as determining the relative impact of different input variables on the total output variability. As such, SA can be an essential tool for reducing the number of design parameters, but also for revealing cost-effective measures, including design changes, to improve the functionality and drawability of parts [5].

In general, SA methods are divided into local and global methods [3]. Local methods are generally based on the approximation of partial derivatives around a fixed design point. The locality of the results and the inability to estimate input parameter interactions are the main drawbacks of local SA measures. Global methods, on the other hand, can remedy this, but usually come at significantly higher computational costs. Yet, they suit the early design stage, where the assessed components evolve more dynamically, limiting the validity of local sensitivities. GSA includes a variety of methods, from more qualitative screening methods, such as the one proposed in Ref. [6], to quantitative methods based on the variance decomposition of the function in question (e.g., Ref. [7]).

SA methods have been widely applied in sheet metal forming applications. The influence of process parameters on the drawability of a stainless steel cup is investigated by Padmanabhan et al. [8]. The influence of different shape parameters of the initial blank sheet was shown by Sattari et al. [9]. In these works, mainly qualitative methods are used due to computational requirements for variance decomposition methods. Originating from other fields of application [10,11], the idea of introducing a surrogate model before calculating global sensitivity measures was also successfully investigated and applied to sheet metal forming. While focusing on SA methods and their advances, Fruth [12] showed that a GPR surrogate model can be used to significantly reduce the computation times for such SA procedures. Lehrer et al. [13] compared sensitivities for one-step and multistep simulation schemes in deep drawing.

The contribution of the present work is twofold. First, we evaluate the applicability of different surrogate models for calculating global design variable sensitivities in assessing the drawability of a deep-drawn component. By this, we circumvent the limitation of otherwise infeasible computational budget requirements. Surrogate models used include a popular GPR approach as well as a customized machine learning approach proposed by Lehrer et al. [14] that has not been previously used in sensitivity analysis. Additionally, we include two common GSA libraries from the literature [15,16]. To validate result quality, all surrogate-based results are compared to traditional simulation-based results. Second, we investigate a recently proposed drawability metric [14] in the context of global sensitivity analysis and, as has rarely been done in previous works, illustrate the comparability with well-established metrics.

The structure of this paper is as follows: The design of experiments (DoE) [17] and the SA methods utilized in the present work are summarized in Secs. 2 and 3, respectively. Subsequently, the application problem is presented in Sec. 4 with a focus on the simulation model, design parameters, and drawability metrics compared here. The metamodels used for comparison to a standard global SA based on finite element simulations are introduced in Sec. 5. The results are presented and discussed in Sec. 6, followed by our conclusions and an outlook on potential future work (Sec. 7).

2 Design of Experiments

One of the essential initial steps in sensitivity analysis and the fitting of surrogate models is conducting a DoE [18]. Assuming little prior knowledge about the investigated function, that is, a black-box problem, the aim in both procedures is to achieve good coverage of the design space. Because there is no unique mathematical definition for this aim and thus no unique optimal way to distribute samples, DoE is still an active field of research. In the present study, we use Latin hypercube sampling and a so-called Sobol sampling scheme from the class of quasi-Monte Carlo (QMC) methods.

Monte Carlo sampling is very popular in a number of applications due to the simplicity of the approach. Samples are drawn pseudo-randomly from the design space. However, samples are generally not uniformly distributed in the design space. As a remedy, so-called QMC methods were developed, which are usually based on low-discrepancy sequences of prime numbers. Popular examples include Hammersley, Halton, Faure, and Sobol sequences (see, for example, Ref. [19] for more details). In the present work, the latter is used for sensitivity analysis. Furthermore, the modifications suggested by Saltelli [20] are applied to reduce the number of required samples in SA. Readers are referred to the original publication for more details of the modifications.
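The contrast between pseudo-random Monte Carlo points and a low-discrepancy Sobol sequence can be illustrated with SciPy's `qmc` module (a sketch; the paper itself uses SALib's Sobol implementation, and the bounds below are the parameter ranges of Table 1):

```python
import numpy as np
from scipy.stats import qmc

d = 5                                        # number of design variables
rng = np.random.default_rng(0)

mc = rng.random((64, d))                     # pseudo-random Monte Carlo points

# Scrambled Sobol sequence: 2**6 = 64 low-discrepancy points in [0, 1)^d
sobol = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=6)

# Lower discrepancy indicates a more uniform coverage of the design space
disc_mc, disc_sobol = qmc.discrepancy(mc), qmc.discrepancy(sobol)

# Rescale the unit-cube samples to the parameter ranges of Table 1
bounds_lo = [0.8, 0.8, 140.0, 130.0, 0.05]
bounds_hi = [2.0, 2.5, 180.0, 190.0, 0.12]
samples = qmc.scale(sobol, bounds_lo, bounds_hi)
```

Here the Sobol design attains a lower discrepancy than the pseudo-random design of the same size, which is precisely the motivation for QMC sampling.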

For fitting our surrogate models, we employ optimal Latin hypercube sampling (OLHS). A Latin hypercube design (LHD) [21] is commonly constructed as follows: A total of N samples in d dimensions is selected. Each dimension of the design space is divided into N bins of equal probability, dividing the total design space into N^d cells. N cells are then randomly selected so that each bin in each dimension only contains a single selected cell. Within each selected cell, one sample is placed in the center or is randomly located [22]. The former case is applied here.
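The centered-cell construction described above can be sketched in a few lines of NumPy: each column is an independent permutation of the N bin centers, which guarantees exactly one sample per bin in every dimension.

```python
import numpy as np

def centered_lhd(n, d, rng):
    """Latin hypercube design with one sample at the center of each
    selected cell: bin centers (2i + 1) / (2n), permuted per dimension."""
    centers = (np.arange(n) + 0.5) / n
    return np.column_stack([rng.permutation(centers) for _ in range(d)])

X = centered_lhd(8, 2, np.random.default_rng(0))
```

Sorting any column recovers the full set of bin centers, confirming the Latin hypercube property.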

OLHS is one of the most significant improvements to LHS, reducing common problems such as spurious correlations between dimensions. The idea is to incrementally improve an LHD with respect to a space-filling criterion. In the present work, a simulated annealing algorithm is utilized, as suggested by Morris et al. [23]. It consists of random pairwise and coordinatewise swaps between samples. Newly generated designs are always accepted if they improve the space-filling criterion, and they are accepted with a certain probability if they do not offer an improvement. Other optimization approaches for OLH have been suggested, for example, by Ye et al. [24] or Jin et al. [25].
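A minimal sketch of this annealing loop is given below. The maximin criterion (largest smallest pairwise distance), the geometric cooling schedule, and the acceptance rule are illustrative assumptions, not the exact settings of Morris et al.; the key point is that swapping two entries within a single column keeps every candidate a valid LHD.

```python
import numpy as np
from scipy.spatial.distance import pdist

def maximin(X):
    """Space-filling criterion: smallest pairwise distance (to be maximized)."""
    return pdist(X).min()

def anneal_lhd(X0, n_iter=500, t0=0.1, cooling=0.995, seed=0):
    """Improve an LHD by simulated annealing with coordinatewise swaps."""
    rng = np.random.default_rng(seed)
    X, best = X0.copy(), X0.copy()
    temp = t0
    for _ in range(n_iter):
        cand = X.copy()
        j = rng.integers(cand.shape[1])                  # random column
        a, b = rng.choice(cand.shape[0], 2, replace=False)
        cand[[a, b], j] = cand[[b, a], j]                # swap within column
        delta = maximin(cand) - maximin(X)
        # Accept improvements always, deteriorations with Boltzmann probability
        if delta > 0 or rng.random() < np.exp(delta / temp):
            X = cand
        if maximin(X) > maximin(best):
            best = X.copy()
        temp *= cooling
    return best

rng = np.random.default_rng(1)
X0 = np.column_stack([rng.permutation((np.arange(12) + 0.5) / 12)
                      for _ in range(3)])
X_opt = anneal_lhd(X0)
```

The optimized design has at least the same maximin distance as the starting design, while each column remains a permutation of the original bin centers.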

In the present work, an in-house Python implementation of OLH is used to generate the DoE for the surrogate models. This reflects our assumption that, in the early design phase, we start from a comparably small number of already conducted simulations. The DoE for sensitivity analysis is based on the implementation of the Sobol sequence within the Python library SALib [26].

3 Sensitivity Analysis and Uncertainty Quantification

The sensitivity analysis methods used in this work are summarized in the following (see Ref. [3] for further details).

Variance-decomposition-based SA methods, such as Sobol's variance decomposition, are based on the idea that the variance V(Y) of a function Y = f(X_1, X_2, \ldots, X_k) depending on k input factors X_i can be decomposed as
V(Y) = \sum_{i} V_i + \sum_{i<j} V_{ij} + \ldots + V_{1,2,\ldots,k}    (1)
Here, Vi represents the first-order effect of factor Xi on the output, whereas Vij and the following terms are called higher-order effects, which consider interactions between two or more inputs. From the above equation, the Sobol indices (SI) are introduced by dividing by the variance V(Y). Then, first- and higher-order indices are defined as
S_i = \frac{V_i}{V(Y)}, \qquad S_{ij} = \frac{V_{ij}}{V(Y)}, \qquad \ldots    (2)
Since the number of higher-order terms increases exponentially with the number of input variables, the so-called total effects are introduced. The total Sobol index S_i^T represents the sum of the first-order and all higher-order effects of the factor X_i. When these indices are calculated on the basis of a Sobol sequence, as in the SALib library used in the present work [26], a total of
N \, (d + 2)    (3)

target function evaluations are required [20]. Here, N can be specified as the desired number of samples. d denotes the number of design variables.

4 Application Case

Deep drawing is a forming process that involves the conversion of flat sheet metal into three-dimensional shapes. This is achieved by drawing a blank into a die with the help of a punch. The high forming energies required, especially for large steel geometries, are supplied by a press. Related mechanical influences on the process include plastic hardening of the blank, lubrication between the tools and the blank, the shape of the deep-drawn part, and the local restraint of material flow into the die. Deep drawing is a fast and cost-efficient process when large quantities are needed.

Explicit FEM models are state-of-the-art for simulating deep drawing [27]. A rigid punch mesh is moved over time to enforce deformation on the blank via contacts. Analytical drawbead and blankholder forces affect material draw-in into the die. The material properties here are represented by a transverse anisotropic elasto-plastic material model with the Hill48 yield criterion [28], using Hockett-Sherby hardening [29]. The simulation scheme is computationally costly, as the maximum time-step is restricted by the Courant-Friedrichs-Lewy condition [30]. Therefore, mass scaling is applied to increase the allowable maximum time-step size at the cost of nonphysical added mass. Moreover, linear shape functions are employed for the deformable Belytschko-Tsay shell elements of the blank. Reduced integration is used to further reduce computational costs. Additionally, the adaptive remeshing technique allows for a fine mesh in areas of high deformation, whereas other regions are meshed more coarsely. These efforts result in simulation times of roughly 10 min for our cross-die example. This underlines the need for a further reduction in the evaluation time to compute SIs, as it involves at least hundreds to thousands of simulation runs. An exemplary setup of an FEM deep drawing model using LS-DYNA is given in Ref. [31]. Our cross-die simulation model is shown in Fig. 1. Supplementary simulation parameters are given in Table 2 in Appendix A.
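A back-of-envelope illustration of why mass scaling helps, using the material data of Table 2; the smallest element edge length and the scaling factor are assumed values for the sake of the example, and the 1D wave-speed formula is only an estimate of what the solver computes internally.

```python
import numpy as np

# Stable explicit time-step from the CFL condition: dt <= L_min / c,
# with the 1D elastic wave speed c = sqrt(E / rho).
E = 205e3        # Young's modulus in MPa (= N/mm^2), cf. Table 2
rho = 7.9e-9     # mass density in t/mm^3, cf. Table 2
L_min = 2.0      # smallest element edge length in mm (assumed value)

c = np.sqrt(E / rho)      # elastic wave speed in mm/s (~5100 m/s for steel)
dt = L_min / c            # stable time-step in s

# Mass scaling: multiplying rho by s raises the stable step by sqrt(s),
# at the cost of nonphysical added mass
s = 4.0
dt_scaled = L_min / np.sqrt(E / (s * rho))
```

Quadrupling the density thus doubles the admissible time-step, halving the number of explicit steps for a fixed process time.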

Fig. 1
The arrangement of the geometries starts from top to bottom. The die is set to a reference position, followed by the blank with a sheet thickness-dependent gap. The blank is followed both by the blankholder, which presses the blank against the die, and the punch. In our case, the die moves toward the punch until the maximum forming length is reached. Analytical drawbeads restrain the material flow.
Table 1

The five parameters used in this exemplary problem

Symbol    Parameter range      Parameter
t         0.8–2.0 mm           Sheet thickness
r         0.8–2.5              Lankford coefficient
σy        140.0–180.0 MPa      Yield strength of the blank
fblkh     130.0–190.0 kN       Blankholder force
μd        0.05–0.12            Dynamic coefficient of friction

This simulation scheme focuses on the development stage in which a first iteration of the tooling geometry has been designed. Readers interested in SA at an earlier design stage can opt for a one-step simulation scheme at the cost of less accurate results [13].

We select five parameters comprising geometry, material, and process parameters for which we calculate SIs. These are listed in Table 1. Here, all parameters are assumed to follow uniform distributions.

Table 2

Key settings for the deep drawing simulation model

Symbol    Value               Parameter
lcross    290 mm              Cross-die length
wcross    120 mm              Cross-die width
hcross    29 mm               Cross-die depth
fdwb      200 N               Analytical drawbead forces
E         205 GPa             Young's modulus
ν         0.3                 Poisson ratio
ρ         7.9·10⁻⁹ t/mm³      Mass density
σs        588.0–628.0 MPa     Saturation stress
N         0.82–0.49           Material constant
p         0.70–0.64           Material constant

The hardening parameters vary due to adaptations regarding σy to ensure hardening-only curves.

Subsequently, the calculated blank deformations are evaluated. In general, the postprocessing is focused on inspecting the structural integrity of the drawn geometry (e.g., severe thinning and cracks), the determination of a press capable of applying the necessary restraining forces, and evaluating wrinkling to assess the feasibility of the process.

There are several methods to assess the drawability of complex shapes. Failure models, such as the generalized incremental stress-state-dependent damage model (GISSMO) [32], accumulate damage by an incremental formulation until failure. Another common practice is to investigate the deformation of the blank with a forming limit diagram (FLD). Here, regions of cracks and wrinkles can be confined according to the major and minor principal Hencky strains.

The drawability measure used as a reference in the present work is the average thickness reduction in the component over the drawing process. This measure has been widely applied for more than 20 years (e.g., Refs. [9,33,34]). In the present work, the definition by Guo et al. [33] is applied
f_{\mathrm{thinning}} = \frac{1}{N} \sum_{e=1}^{N} \frac{h_0 - h_t^e}{h_0}    (4)

where N represents the number of elements in the simulation model, h_0 is the initial thickness of the blank sheet, and h_t^e are the element thicknesses at the end of the drawing simulation.
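A minimal sketch of this measure, assuming it is the per-element mean of the relative thickness reduction; the element thicknesses below are hypothetical values, not simulation output.

```python
import numpy as np

def avg_thinning(h_end, h0):
    """Per-element mean of the relative thickness reduction."""
    return np.mean((h0 - h_end) / h0)

# Hypothetical element thicknesses (mm) at the end of the drawing simulation
h_end = np.array([1.45, 1.32, 1.50, 1.28])
f_th = avg_thinning(h_end, h0=1.5)   # -> 0.075
```

In practice, h_end would be read from the element-wise thickness field of the FE result.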

The second drawability measure employed in the present work is based on the average distance of individual bad elements from their respective closest limit curve. It was originally suggested by Sun et al. [35] and extended to drawable configurations in [14]
f = \begin{cases} f_{\mathrm{nd}}, & \text{nondrawable configuration} \\ f_{\mathrm{d}}, & \text{drawable configuration} \end{cases}    (5)
where f_{nd} denotes nondrawable and f_{d} drawable configurations. Because most drawing setups exhibit negligible wrinkling, we adopt the drawability threshold of f_{dt} = 1.395·10⁻⁶ to classify the setups below it as drawable. Also, an element-wise area weighting is introduced for both drawable and nondrawable domains, to handle different mesh and element sizes
f_{\mathrm{nd}} = \frac{1}{n} \sum_{e=1}^{n} w \, A_e \, d_e, \quad w \in \{w_c, w_w\}    (6)
f_{\mathrm{d}} = \frac{1}{n} \sum_{e=1}^{n} w_g \, A_e \, d_e    (7)

Here, A_e represents the ratio between the surface area of an element and that of the drawn blank. d_e is the minimum Euclidean distance of an element to the corresponding limit curve, n denotes the total number of elements, and w the weights. Depending on the subscript, the elements are associated with cracks (c), wrinkles (w), or good (g, drawable) regions. To account for the influence of wrinkling, we assign a weight of w_w = 0.1. For scaling purposes, we set w_g = 0.2 for good points. The values of the factors are chosen as suggested in the original publication [14] due to the similarity of the application problems.

5 Deep Drawing Metamodels

In this section, we introduce the surrogate modeling approaches used in this study. First, we summarize GPR by introducing the most relevant concepts. Second, we introduce an ensemble method based on a machine learning architecture recently proposed by Lehrer et al. [14].

5.1 Gaussian Process Regression.

To obtain the GPR predictor for a d-dimensional function, first assume a random process for the function
Y(x) = \mu_0 + Z(x)    (8)

where μ0 is an unknown constant and Z(x) is a stationary random process. Additionally, a sample dataset is required, denoted here as (S, y_S). It consists of m samples with input data S ∈ R^{m×d} and the outputs of the respective function y_S ∈ R^m.

To make predictions using the random process and the sampling dataset, a kernel is used to model the correlation between the sample points. Numerous kernel functions have been proposed, and the readers are referred to one of the textbooks such as [36] or popular software implementations of GPR (for example, Ref. [37]) for more details. Here, the well-established Matérn kernel with ν = 3/2 is employed
k(x, x') = \left(1 + \sqrt{3}\, r_\theta \right) \exp\!\left(-\sqrt{3}\, r_\theta\right), \qquad r_\theta = \sqrt{\sum_{k=1}^{d} \frac{(x_k - x'_k)^2}{\theta_k^2}}    (9)
where θ_k denotes the kernel length scale, representing the hyperparameter(s) of the GPR surrogate model. As there is an independent parameter for each dimension of the design space, the given kernel function is called anisotropic. Given the sample dataset, the GPR model is fitted by running a separate optimization for the kernel hyperparameters θ_k. Here, the popular L-BFGS-B algorithm [38] is utilized. More involved techniques for hyperparameter optimization have also been suggested; see, for example, Ref. [39]. For more detailed derivations and overviews, the readers are referred to the original publications [40–42] and one of the GPR textbooks, such as [36].
With the representation of the random process, the sampling data, and the kernel function, the GPR predictor for a new point x can be written as
\hat{y}(x) = \hat{\mu}_0 + r^T R^{-1} \left( y_S - \mathbf{1}\hat{\mu}_0 \right), \qquad \hat{\mu}_0 = \frac{\mathbf{1}^T R^{-1} y_S}{\mathbf{1}^T R^{-1} \mathbf{1}}    (10)

where r is the correlation vector between the sample data and the new point, R ∈ R^{m×m} is the correlation matrix of the sample data points, and 1 ∈ R^m is a column vector filled with ones.
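The standard ordinary-kriging predictor can be implemented directly in NumPy. This sketch keeps the length scales fixed (the paper optimizes them with L-BFGS-B) and adds a small jitter to the diagonal for numerical conditioning; the sample data are hypothetical.

```python
import numpy as np

def matern32(X1, X2, theta):
    """Anisotropic Matern nu = 3/2 correlation between two point sets."""
    diff = X1[:, None, :] - X2[None, :, :]
    d = np.sqrt(np.sum((diff / theta) ** 2, axis=-1))
    return (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

def gpr_predict(x, S, yS, theta):
    """Ordinary-kriging mean prediction at a single point x."""
    m = len(S)
    ones = np.ones(m)
    R = matern32(S, S, theta) + 1e-10 * np.eye(m)   # jitter for conditioning
    mu0 = ones @ np.linalg.solve(R, yS) / (ones @ np.linalg.solve(R, ones))
    r = matern32(np.atleast_2d(x), S, theta)[0]     # correlation vector
    return mu0 + r @ np.linalg.solve(R, yS - mu0 * ones)

rng = np.random.default_rng(0)
S = rng.random((6, 2))                    # sample inputs, m = 6, d = 2
yS = np.sin(3.0 * S[:, 0]) + S[:, 1]      # hypothetical responses
pred = gpr_predict(S[0], S, yS, theta=np.array([0.5, 0.5]))
```

Because the predictor interpolates in the noise-free case, evaluating it at a training point reproduces the training value up to the jitter level.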

One potential advantage in the estimation of SIs using a GPR surrogate model is that it can provide prediction uncertainties together with the prediction. While this is not exploited in the present work, partly for ease of comparison with other methods, we believe it is worth investigating how to include these uncertainties in SI estimation in future work.

Here, Gaussian process regression and kernel implementations of the scikit-learn library in Python are used [37].
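With scikit-learn, the anisotropic Matérn 3/2 setup described above fits in a few lines. The training data below are a hypothetical analytic stand-in; in the paper they would come from the OLHS design and the FEM simulations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.random((40, 5))                    # stand-in for an OLHS design
y = X[:, 0] ** 2 + 0.5 * X[:, 1]           # hypothetical objective values

# Anisotropic Matern 3/2 kernel: one length scale per design variable;
# hyperparameters are tuned internally by maximizing the log-marginal
# likelihood with (restarted) L-BFGS-B
kernel = Matern(length_scale=np.ones(5), nu=1.5)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5,
                               normalize_y=True, random_state=0).fit(X, y)

mean, std = gpr.predict(X[:2], return_std=True)   # mean and uncertainty
```

Passing `return_std=True` yields the per-point prediction uncertainty mentioned above.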

5.2 Ensemble.

The same dataset (S, y) as in Sec. 5.1 is used. Here, an instance of the set is {x_i, y_i}, with the features x_i ∈ X of the input space and the target value y_i ∈ Y of the output space (supervised learning). To establish a surrogate model for the multivariate single-objective deep drawing problem, we use the ensemble method of stacked generalization (stacking), first proposed by Wolpert [43]. Here, first-level learners L1 are trained on the features of the dataset. The second-level learner L2 is then trained on the predictions of the first-level learners.

Stacking offers two distinct advantages. First, it enhances the accuracy and robustness of the final predictions by leveraging an ensemble of base estimators instead of relying on an individual model. Second, stacking exhibits greater flexibility in combining first-level predictions compared to methods such as averaging or voting, thereby improving accuracy.

To improve the prediction performance of the stacking surrogate, first-level learners should be diverse. This is achieved through two measures. First, the first-level learners are trained with a set of n different base regression algorithms A (see Fig. 2). Second, these algorithms are used to establish a set of bootstrap aggregation (bagging) estimators [44] trained on k-fold subsets of the dataset. This follows the combination approach proposed by Wolpert et al. [45]. A comprehensive overview of the ensemble methods can be found in [46]. Additionally, Mendes-Moreira et al. [47] cover regression ensembles.

Fig. 2
Based on the feature set x, bagging is used to generate first-level learners L1 from the base algorithms A. Subsequently, their predictions serve as input to the second-level learner L2. At last, the second-level learner predicts the target value y.
However, simply increasing the number of first-level learners does not guarantee an enhancement in the ensemble prediction accuracy. This is exemplified by a linear regression second-level learner
y = \sum_{i=1}^{n_A} \sum_{j=1}^{n_B} w_{i,j}\, \hat{y}^{1}_{i,j}, \qquad \hat{y}^{1}_{i,j} = y + e^{1}_{i,j}    (11)

which incorporates a number of base algorithms n_A and a number of bagging models per base algorithm n_B. As its weights w_{i,j} are adjusted through least-squares minimization, the addition of more first-level learners introduces additional terms with weights to be optimized. As long as a new first-level learner does not contribute new information, its weight is minimized. Still, by adding a further term, even if the remaining weight is close to zero, an additional residual error e^1_{i,j} will be propagated to the final prediction y, leading to error accumulation. In either scenario, the introduction of more first-level learners increases the complexity of optimizing the second-level learner, rendering it susceptible to suboptimal results. Due to these considerations, a balance between the number of first-level learners (quantity) and their information contribution (quality) should be sought.
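The weight-minimization behavior of a least-squares second-level learner can be demonstrated with `scipy.optimize.nnls` (a non-negativity-constrained variant, as used later in this work). The "learner predictions" here are simulated, not actual first-level model outputs:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
y = rng.random(200)                            # target values

# Columns: simulated first-level predictions. Two informative learners
# and one that carries no information about y at all.
P = np.column_stack([
    y + 0.05 * rng.standard_normal(200),       # accurate learner
    y + 0.10 * rng.standard_normal(200),       # weaker learner
    rng.random(200),                           # uninformative learner
])

# Non-negative least-squares combination weights of the second-level learner
w, _ = nnls(P, y)
```

As argued above, the uninformative learner receives a far smaller weight than the accurate one, yet it still contributes its residual error wherever its weight is nonzero.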

We use the implementations of scikit-learn [37] for the configuration of the ensemble model. Its architecture is shown in Fig. 2.

Ten base algorithms are employed, including lasso, elastic net, Gaussian process, decision tree, k-nearest neighbors, stochastic gradient descent, multilayer perceptron, adaptive boosting, and support vector regression. The number of first-level learners per base algorithm is set during hyperparameter optimization. The second-level learner here is also chosen by hyperparameter optimization from a set of estimators. These are Ridge, Bayesian Ridge, and linear regression with non-negativity constraints. The ensemble's hyperparameters are optimized with a grid search optimization, during which the contributions of the first-level learners are adjusted as well. Five-fold cross-validation is conducted to mitigate the effects of overfitting.
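A reduced illustrative configuration of this architecture with scikit-learn's `StackingRegressor` is sketched below: only three base algorithms instead of ten, fixed bagging sizes, and no grid search, but the same structure of bagged first-level learners combined by a non-negative linear second-level learner with 5-fold cross-validation. The training data are a hypothetical analytic stand-in.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.random((120, 5))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(120)

# Diverse first-level learners: different base algorithms, each bagged
level1 = [
    ("bag_tree", BaggingRegressor(DecisionTreeRegressor(), n_estimators=10,
                                  random_state=0)),
    ("bag_knn", BaggingRegressor(KNeighborsRegressor(), n_estimators=10,
                                 random_state=0)),
    ("lasso", Lasso(alpha=1e-3)),
]

# Second-level learner with non-negativity constraint; cv=5 generates
# out-of-fold first-level predictions for its training
stack = StackingRegressor(estimators=level1,
                          final_estimator=LinearRegression(positive=True),
                          cv=5).fit(X, y)
r2 = stack.score(X, y)
```

In the full setup, the choice of second-level learner and the number of bagging models per base algorithm would additionally be subject to the grid search described above.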

Alternative architectures with ensemble pruning (neglecting noncontributing models) did not show accuracy improvements. It was also found that switching to per-bagging-averaged outputs Y1 as inputs for the second-level learner does not result in significant changes with respect to the coefficient of determination. As part of the hyperparameter optimization, a linear regressor with a non-negativity constraint was predominantly the best-performing second-level learner. This agrees with the findings in Ref. [48].

5.3 Suggested Workflow.

The idea in the present work is to fit a surrogate model based on a separate, smaller DoE and then evaluate this surrogate model for the more expensive SA. This surrogate-assisted SA clearly offers significant advantages in terms of computational costs. An attempt to quantify this advantage and its influence on result quality is presented in Sec. 6 for the application problem introduced above. A schematic comparing the classic SA workflow with the surrogate-assisted SA utilized here is depicted in Fig. 3.

Fig. 3
Schematic representation of the workflow for a sensitivity analysis (SA). Compared are a "classic," simulation-based approach and the surrogate-assisted approach utilized in the present work.

In the classic SA workflow, a variant of Sobol sampling is used in combination with the simulation model to determine the sensitivity indices from the respective objective function values. In the surrogate-assisted workflow, a separate DoE (here based on OLHS) is introduced that is much smaller than the DoE necessary for the SA. From this smaller DoE, a surrogate model (e.g., kriging or the ensemble model) can be fitted. The surrogate model is subsequently used to evaluate the Sobol samples required for the SA.

The FE solver LS-DYNA is used here for all simulations. All software is implemented in Python. For sensitivity analysis, the common library SALib [26] is used. The GPR surrogate models and ensemble models are fitted utilizing scikit-learn [37].
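The surrogate-assisted workflow can be condensed into an end-to-end sketch: a small DoE, an "expensive" evaluation step, a cheap surrogate, and a Saltelli-style SA run entirely on the surrogate. The FEM solver is replaced by an analytic stand-in function, plain LHS stands in for the in-house OLHS, and the Jansen total-index estimator stands in for SALib's analysis routine.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_sim(X):
    """Analytic stand-in for the costly FEM drawability simulation."""
    return X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1]

d, n_train, N = 3, 60, 2048

# 1) Small DoE (plain LHS as a stand-in for OLHS) + "expensive" evaluations
X_train = qmc.LatinHypercube(d=d, seed=0).random(n_train)
y_train = expensive_sim(X_train)

# 2) Fit the cheap surrogate on the small DoE
gpr = GaussianProcessRegressor(kernel=Matern(length_scale=np.ones(d), nu=1.5),
                               normalize_y=True,
                               random_state=0).fit(X_train, y_train)

# 3) Saltelli-style total indices, evaluated on the surrogate only:
#    N * (d + 2) cheap predictions instead of FEM runs
rng = np.random.default_rng(1)
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = gpr.predict(A), gpr.predict(B)
var = np.var(np.concatenate([fA, fB]))
ST = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    ST.append(np.mean((fA - gpr.predict(ABi)) ** 2) / (2.0 * var))  # Jansen
```

Only the 60 training evaluations touch the "expensive" model; the 2048·(3+2) SA evaluations all hit the surrogate, which is the cost reduction the workflow targets.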

6 Results and Discussion

In the following, we present, compare, and discuss the results for the two different SA approaches. Our two introduced surrogate models are compared with the established GSA packages GSAreport [16,49] and UQpyLab [15,50]. The former uses random forest regression as an internal surrogate model, and the latter employs polynomial chaos expansion (PCE) to determine sensitivity indices (see Table 5 in Appendix C for more details). For reference, we employ a GSA based solely on mechanical simulations. All surrogate models are fitted using the five parameters given in Table 1. SA is performed using N = 1024, unless otherwise stated. Thus, we end up with 7168 FEM simulations for our mechanical reference (compare Eq. (3)). We train each surrogate model to predict a single drawability measure. This keeps the models flexible for use in other scenarios, such as optimization loops, and allows them to be retrained if their accuracy is deemed low. We assume that engineers are mainly interested in the total influence of a parameter. Therefore, we evaluate sensitivities using the total-order SI, as it includes all effects (direct and higher-order) of a parameter.

Table 3

Comprehensive overview of the surrogate models' accuracies in the computation of total order Sobol indices for a fully converged set of training samples ns = 500 and under limited data ns = 50 for the distances drawability measure

Surrogate    Parameter    ns = 500    ns = 50
GPR          t            0.08 (1)    0.27 (2)
             r            0.05 (3)    0.12 (2)
             σy           0.91 (3)    1.00 (4)
             fblkh        1.00 (2)    3.61 (3)
             μd           0.16 (1)    1.0 (3)
Ensemble     t            0.33 (3)    0.36 (4)
             r            0.19 (4)    0.24 (3)
             σy           0.71 (2)    0.93 (2)
             fblkh        0.40 (1)    0.82 (1)
             μd           0.79 (3)    0.43 (1)
UQpyLab      t            0.19 (2)    0.05 (1)
             r            0.03 (1)    0.40 (4)
             σy           0.14 (1)    0.04 (1)
             fblkh        1.02 (4)    5.61 (4)
             μd           0.32 (2)    2.81 (4)
GSAreport    t            0.33 (4)    0.35 (3)
             r            0.04 (2)    0.06 (1)
             σy           0.93 (4)    0.99 (3)
             fblkh        1.00 (3)    1.0 (2)
             μd           1.00 (4)    0.98 (2)

The cells display the relative error of the SI means compared to the mechanical reference, as well as the rank of each surrogate for each parameter (in parentheses).

To investigate whether SIs can be accurately calculated based on our surrogate predictions, they are compared with the SI of GSAreport, UQpyLab, and the mechanical reference simulation in Fig. 4.

Fig. 4
SI for the ensemble model, GPR surrogate, UQpyLab, and GSAreport. The SI based on the mechanical simulation is given as reference. Results are shown on the distances objective function for every parameter (see Table 1).

There is a relatively large confidence interval (CI) associated with the mechanical reference and most models for all SIs (confidence level of 95%). This is rooted in the choice of the objective function and is discussed further alongside Fig. 5. For completeness, the same comparison for the thinning objective is shown in Fig. 8 in Appendix B. UQpyLab does not show CIs, as its SI calculation is deterministic; its SIs remain comparable, since the correct value lies near the mean with high probability. The SIs of all surrogate models are similar to those of the mechanical simulation. High sensitivities are calculated for the sheet thickness t and the Lankford coefficient r, as well as low sensitivities for the yield stress σy, the blankholder force fblkh, and the dynamic coefficient of friction μd. The SIs of the GPR surrogate deviate less for the two most sensitive features and μd, while the ensemble surrogate performs better for the design variables σy and fblkh. We conclude that our models are applicable to the SI computation. The number of computationally expensive simulation runs can therefore be reduced drastically, making SI computation viable in the early design phase.

Fig. 5
SIs of the distances and thinning objective for the five design variables based on the mechanical simulation (reference)
Fig. 6
SIs for sheet thickness variable of thinning objective function for all surrogate models over the number of simulations used for their setup. A stand-alone OLHS was conducted for each of the investigated number of simulations ns. The dashed black line indicates the SI of the mechanical simulation (reference).

UQpyLab returns SIs relatively close to the reference throughout. GSAreport also detects the SI magnitudes, but it deviates more from the reference, especially for the less sensitive parameters. This pattern is also evident in Fig. 8. We attribute the suboptimal performance of GSAreport to the fact that it was originally designed to operate on internally generated samples based on specific sampling techniques suited to the various sensitivity analysis algorithms (e.g., a Sobol scheme). Despite offering the option to fit a random forest model to existing data, the resulting interpolated model exhibited low accuracy, with an R2 value of 0.52 for the 500-sample case discussed in Fig. 4. This significantly impacts the predictions, leading to limited accuracy of the GSA. Consequently, GSAreport does not appear to be the most suitable choice for conducting a GSA on pre-existing data.

Having illustrated that the surrogate models are capable of contributing to the SI calculation, the question arises of how few simulations are needed to set up well-performing surrogate models. The SIs of the most sensitive design variable here (sheet thickness t) are shown per surrogate in Fig. 6 over the number of simulations used for their fitting (training).
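
As a rough illustration of such a convergence study, the sketch below fits a GPR surrogate on stand-alone Latin hypercube designs of increasing size and tracks its held-out R². The analytic objective, dimensions, and sample sizes are illustrative assumptions; the paper's surrogates are fitted on the mechanical simulation data instead:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(X):
    # smooth analytic stand-in for the drawability objective
    return np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# fixed held-out test set for the R^2 comparison
X_test = qmc.LatinHypercube(d=2, seed=1).random(200)
y_test = objective(X_test)

scores = {}
for ns in (10, 50, 100):
    # stand-alone DoE for each investigated number of "simulations" ns
    X_train = qmc.LatinHypercube(d=2, seed=ns).random(ns)
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(X_train, objective(X_train))
    scores[ns] = gpr.score(X_test, y_test)   # R^2 on held-out points
```

Plotting the resulting SIs (or R² values) over ns, as in Fig. 6, then shows from which training budget onward the surrogate stops improving.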

Fig. 7
SIs of our GPR and ensemble surrogates, as well as UQpyLab and GSAreport. The surrogate-based SIs are complemented with the mechanical simulation with limited budget and the mechanical simulation reference for the distances drawability measure.

Our surrogate models and UQpyLab converge to the same SI, which here differs from the mechanical reference. As little change is observed beyond 100 training samples, the remaining difference is likely not due to limited surrogate model accuracy. This is further confirmed by stagnating R2 values of approximately 0.93 for all GPR and ensemble models based on 50 or more training samples. The Sobol indices of GSAreport do not converge for the investigated numbers of samples. Moreover, GSAreport deviates more from the mechanical reference than the other three methods.

All surrogate models are fitted to identical training data sets created using OLHS, which likely leads to a very similar fit of the respective models and best explains the small differences between their SIs. Our best explanation for the remaining difference between all surrogates and the mechanical reference, even for large training sets, is the different sampling method used for surrogate training and for the SA by SALib.2
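
This DoE dependence can be probed directly. The sketch below uses SciPy's QMC module to generate an optimized LHS (the family used for surrogate training) next to a scrambled Sobol design (the family used on the SA side); the design size and the discrepancy criterion are illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc

n, d = 128, 5   # illustrative design size and the five design variables

# optimized LHS, as used to train the surrogates
olhs = qmc.LatinHypercube(d=d, optimization="random-cd", seed=0).random(n)

# scrambled Sobol sequence, the sampling family used for the SA
sobol = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # n is a power of 2

# centered discrepancy as one way to quantify how differently
# the two schemes cover the design space
disc = {"OLHS": qmc.discrepancy(olhs), "Sobol": qmc.discrepancy(sobol)}

# every LHS column places exactly one point in each of the n strata
strata = np.sort(np.floor(olhs[:, 0] * n).astype(int))
```

Training a surrogate on one design and evaluating SIs on samples from the other, as done implicitly in the paper, mixes these two coverage patterns; the footnoted preliminary test with Sobol-sampled training points is consistent with this explanation.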

Moreover, the surrogate models' negligible changes in SI over an increasing number of data points demonstrate their data efficiency. As a supplement, the convergence plot for the second most sensitive parameter, the Lankford coefficient, is given in Fig. 9 (Appendix  B). We detect the same pattern for the remaining design variables. Our surrogate models perform very similarly: the ensemble model, showing good results from ns = 50 onward, converges at ns = 100, while the SIs of the GPR model hardly change from ns = 50 onward. The same behavior can be noted for the UQpyLab model. Again, the GSAreport model deviates more from the mechanical reference and the other models.

Fig. 8
SI for the ensemble model, GPR surrogate, UQpyLab, and GSAreport. The SI based on the mechanical simulation is given as reference. Results are shown on the thinning objective function for every parameter (see Table 1).

Overall, our surrogate models are sufficiently accurate with very few base simulations, allowing significant savings in computational cost (up to 99.3% fewer calculations3).

For deep drawing, SIs reveal the potential of changing certain design variables to obtain a better drawing configuration. To detect the most relevant influences, it is important to first determine the magnitude of the sensitivities. Second, when relying on design variables that represent cost-effective changes in the drawing configuration (e.g., changes in geometry), the exact SIs might differ enough to affect the order in which setup changes are made. Substantial differences in global sensitivities, however, are reliably detected. Therefore, the authors recommend grouping surrogate-based SIs of similar values and making further configuration changes accordingly.

Sobol indices are calculated using a scalar target value, which is derived from the evaluation of the mechanical simulation during postprocessing. This evaluation notably affects the SI calculation, as illustrated in Fig. 5. For both objective functions, the parameters show similar SI magnitudes. However, the thinning evaluation shows a decrease in SI for each design variable. Here, the parameters σy, fblkh, and μd are close to zero and therefore insensitive with respect to the objective. This underlines the importance of choosing an objective that represents the problem at hand, as sensitivities and their associated drawing configuration modifications can vary considerably. Furthermore, the objective functions lead to different magnitudes of the CI per design variable. The distances objective shows larger CIs throughout. Therefore, the landscape of the thinning objective may be more suitable for the SI computation. However, it is essential to emphasize that the evaluation of the distances objective more accurately encapsulates the concept of drawability, as it also takes wrinkling into account. To overcome the limitation of infeasible computational effort for reducing CIs, surrogate models are well suited to increasing N substantially (compare Eq. (3)), as their evaluation times are insignificant.
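
To make the CI-versus-N trade-off concrete, the sketch below bootstraps the CI of a Jansen-type total-order estimate for a cheap analytic stand-in model at a small and a large N. The model, sample sizes, and the 95% normal-approximation CI are illustrative assumptions, not the paper's bootstrap settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def st_with_ci(N, n_boot=200):
    """Total-order SI of x0 for f = 2*x0 + x1, plus a bootstrapped 95% CI."""
    f = lambda X: 2.0 * X[:, 0] + X[:, 1]
    A, B = rng.random((N, 2)), rng.random((N, 2))
    AB = A.copy()
    AB[:, 0] = B[:, 0]                       # resample only x0
    d2 = (f(A) - f(AB)) ** 2
    var = np.var(np.concatenate([f(A), f(B)]))
    est = d2.mean() / (2.0 * var)
    # bootstrap the numerator to obtain a CI half-width for the estimate
    boot = [d2[rng.integers(0, N, N)].mean() / (2.0 * var)
            for _ in range(n_boot)]
    return est, 1.96 * float(np.std(boot))

est_small, ci_small = st_with_ci(64)         # affordable with simulations
est_large, ci_large = st_with_ci(2 ** 14)    # affordable only via a surrogate
```

The CI half-width shrinks roughly with 1/sqrt(N), which is why near-free surrogate evaluations allow the CIs to be driven down where direct simulation cannot.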

When operating with a limited computational budget or an objective function that inherently yields large CIs, the CIs might be too large to rely on the calculated SIs. To address this limitation, our surrogate models' negligible evaluation times can be used to run the SI calculation with a considerably higher N. Given approximately the same computational budget (nsurrogates = 50, nmechsim = 56), Fig. 7 shows the SIs and their corresponding CIs for the distances objective function for the direct evaluation of the mechanical simulation compared to the surrogate-based SI computation. For our surrogates, we choose N = 2^16.
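
The budget figures quoted above are consistent with a Saltelli-type evaluation cost of n = N(d + 2) model runs for the d = 5 design variables; assuming this cost model corresponds to Eq. (3), a back-of-the-envelope check reads:

```python
d = 5                              # number of design variables
runs = lambda N: N * (d + 2)       # Saltelli-scheme cost per SI computation

n_ref = runs(1024)                 # mechanical reference: 7168 simulations
n_small = runs(8)                  # limited budget: 56 simulations (N = 8)
n_surrogate_fit = 50               # base simulations to fit a surrogate

# savings quoted in the text: up to ~99.3% fewer expensive calculations
savings = 1.0 - n_surrogate_fit / n_ref

# the surrogate itself is then queried with N = 2**16 near-free
# evaluations to shrink the CIs
n_cheap = runs(2 ** 16)
```

Only the 50 fitting simulations are expensive; the 458,752 surrogate calls behind N = 2^16 cost essentially nothing.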

Fig. 9
SIs for Lankford coefficient variable of thinning objective function for all surrogate models over the number of simulations used for their setup. A stand-alone OLHS was conducted for each of the investigated number of simulations ns. The dashed black line indicates the SI of the mechanical simulation (reference).

The mechanical simulation with limited budget detects the aforementioned SI magnitudes for every parameter when compared to the mechanical reference run (nmechref = 7168): sheet thickness and Lankford coefficient are considered sensitive, whereas the remaining parameters are not. The corresponding CIs are large, as expected (N =8). The mechanical simulation does not exactly resemble the reference, but the overall pattern in SI magnitude is discernible. Here, the very low number of samples leads to a mean total-order SI of the Lankford coefficient above 1.0 for the distances objective. As total-order SIs should never be larger than 1.0, this, in combination with the large CI, illustrates the need for a surrogate model in this case. On the other hand, the SIs based on the limited-budget surrogate predictions differ noticeably from each other. This reveals the influence of the different surrogate approaches, which learn different patterns, resulting in different predictions and therefore different SIs. In general, none of the limited-budget surrogates resembles the mechanical reference across all parameters. Concerning the magnitudes of the SIs, a user will still get a fair idea of which parameters are sensitive. Compared to the surrogates presented in Fig. 4, there is potential to improve SI similarity by adding more samples. The surrogate results for the thinning drawability measure (see Fig. 10 in Appendix  B) are significantly better, again stressing the importance of selecting a representative drawability measure.

Fig. 10
SIs of our GPR and ensemble surrogates, as well as UQpyLab and GSAreport. The surrogate-based SIs are complemented with the mechanical simulation with limited budget and the mechanical simulation reference for the thinning drawability measure.

Although the surrogate evaluation time becomes relevant for a large number of samples, it is still orders of magnitude less significant than other factors, such as the simulation time of a single configuration. Whether simulation-based SIs with immense CIs are less credible than an estimation with negligible CIs using surrogate models depends on the user's trust in the surrogate models, the real-world impact of the design variables, the desired confidence level, and the computational budget.

Tables 3 and 4 provide a comprehensive overview of the utilized surrogate models for the distances and thinning drawability measures, respectively. Displayed are the relative errors of the surrogate models compared to the mechanical reference. Both converged models and surrogates under limited data are included in the comparison.

Table 4

Comprehensive overview of the surrogate models' accuracies in the computation of total order Sobol indices for a fully converged set of training samples ns = 500 and under limited data ns = 50 for the thinning drawability measure

Surrogate   Parameter   ns = 500    ns = 50
GPR         t           0.29 (1)    0.28 (1)
            r           0.29 (1)    0.29 (1)
            σy          0.13 (1)    0.27 (2)
            fblkh       1.00 (2)    1.00 (2)
            μd          0.86 (1)    0.90 (2)
Ensemble    t           0.29 (1)    0.32 (3)
            r           0.29 (1)    0.34 (3)
            σy          1.09 (4)    1.01 (4)
            fblkh       6.64 (4)    11.72 (4)
            μd          2.7 (4)     3.67 (4)
UQpyLab     t           0.29 (1)    0.28 (1)
            r           0.29 (1)    0.29 (1)
            σy          0.14 (2)    0.25 (1)
            fblkh       0.95 (1)    0.89 (1)
            μd          0.86 (1)    0.65 (1)
GSAreport   t           0.45 (4)    0.54 (4)
            r           0.43 (4)    0.55 (4)
            σy          1.00 (3)    0.95 (3)
            fblkh       1.00 (2)    1.46 (3)
            μd          1.00 (3)    0.91 (3)

The cells display the relative error of the SI means compared to the mechanical reference, as well as the rank of each surrogate for each parameter (in parentheses).
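
The table's cells can be reproduced mechanically; a small sketch computing the relative errors against a reference SI and ordinal ranks for one parameter is given below (the SI values are hypothetical placeholders, not the paper's numbers):

```python
# Hypothetical total-order SI means for one parameter: reference vs. surrogates
si_ref = 0.50
si_models = {"GPR": 0.36, "Ensemble": 0.34, "UQpyLab": 0.36, "GSAreport": 0.23}

# relative error of each surrogate's SI mean w.r.t. the mechanical reference
rel_err = {m: abs(v - si_ref) / si_ref for m, v in si_models.items()}

# rank 1 = smallest relative error (simple ordinal ranking; ties keep order)
order = sorted(rel_err, key=rel_err.get)
rank = {m: order.index(m) + 1 for m in rel_err}
```

Repeating this per parameter and per drawability measure yields exactly the "error (rank)" cell layout of Tables 3 and 4.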

Table 5

Overview of the settings of the GSAreport [16] and UQpyLab [15] libraries used in the present work as comparisons for our surrogate-based approaches

GSAreport
Parameter                  Value
Version                    1.0.1
Internal surrogate model   random forest
All other settings default.

UQpyLab
Parameter                  Value
Version                    0.9.5
Internal surrogate model   polynomial chaos expansion
Polynomial degree          up to ten
All other settings default.

7 Conclusions

In the present work, Sobol's method for quantitative global sensitivity analysis is complemented with surrogate modeling techniques to assess an exemplary cross-die deep drawing problem. The drawability of the component is used as the target metric for the sensitivity analysis. Two different drawability metrics are compared for context. Five input parameters are defined, including material, process, and geometry parameters. As surrogate models, a standard GPR model and an ensemble model are investigated. In addition, the established approaches of UQpyLab and GSAreport are included in the comparison. Surrogate-assisted sensitivity indices are compared to reference values obtained by performing a high number of finite element simulations.

For both target functions, the surrogate-assisted approaches obtain very good sensitivity indices that are qualitatively equivalent to the mechanical reference at less than 1% of the computational cost. The remaining small differences are explained by the limited quality of the surrogate models and a dependence on the DoE. In addition, the difference between the results for the two target functions is shown to be larger than the error of the surrogate-assisted approach. This highlights once again the importance of careful consideration when choosing the drawability measure for sheet metal forming problems. Furthermore, all investigated models exhibit different deviations from the mechanical reference. Rather than choosing the mathematically most accurate model, one may select a surrogate that captures the overall pattern of SI magnitudes without significant outliers. This better supports the aforementioned grouping of parameters by sensitivity.

Our results confirm that a surrogate-assisted sensitivity analysis approach can be applied to significantly reduce computational costs at earlier design stages, where simulation models are already available but design changes are still necessary. However, the approach remains to be investigated on a more complex, realistic application example. This may also include the use of categorical input parameters, which are arguably more realistic, for example, for the sheet thickness.

We believe that there is potential for further reducing the computational requirements of the surrogate-assisted approach, for example, by employing a multifidelity approach or by utilizing recently popular model order reduction techniques. Moreover, using the Saltelli-improved Sobol sequence for fitting the surrogates might also improve their accuracy. For future work, we also deem it interesting to expand the present investigation to different design variable distributions and to fully utilize the potential of the surrogate models, for example, by including the GPR prediction uncertainty in the SA estimations.

Funding Data

  • Federal Ministry for Economic Affairs and Climate Action (BMWK) (Award Nos.: KK5004606 and KK5339801BD1; Funder ID: 10.13039/501100006360).

Conflict of Interest

There is no conflict of interest.

Data Availability Statement

The detailed information on the proposed methods, simulation models, and the software used in this article can be found in the corresponding sections. Generated data and code can be obtained from the corresponding authors upon reasonable request.

Appendix A: Supplementary Information on the Simulation Model

Appendix B: Complementary Surrogate Results

Appendix C: Parameters Used in Other Surrogate-Based Sensitivity Analysis Libraries

Footnotes

1

Calculated with the shared memory parallel (smp) R13 LS-DYNA solver with single precision on 10 cores using Intel® Core™i9-10900K CPU @ 3.70 GHz processor.

2

A preliminary test with an ensemble model fitted on 128 points sampled through Sobol instead of OLHS showed a smaller difference between the SI computed on the surrogate model and the one computed on the mechanical model. This difference is approximately 0.06, contrasting with the 0.15 observed in Fig. 6. We attribute this to the higher accuracy of the model, indicated by an R2 score of 0.98. Further investigation into this matter is reserved for our future work.

3

Considering converged surrogate performance with 50 base simulations, compared to the 7168 direct computations deemed acceptable regarding CIs.

References

1. Makinouchi, A., 1996, "Sheet Metal Forming Simulation in Industry," J. Mater. Process. Tech., 60(1–4), pp. 19–26. 10.1016/0924-0136(96)02303-5
2. Ablat, M. A., and Qattawi, A., 2016, "Numerical Simulation of Sheet Metal Forming: A Review," Int. J. Adv. Manuf. Technol., 89(1–4), pp. 1235–1250. 10.1007/s00170-016-9103-5
3. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and Tarantola, S., 2007, Global Sensitivity Analysis: The Primer, Wiley, Hoboken, NJ.
4. Santner, T. J., Williams, B. J., and Notz, W. I., 2003, The Design and Analysis of Computer Experiments, Springer, New York.
5. Kott, M., Kraft, M., Emrich, A., and Groche, P., 2021, "Variance Based Sensitivity Analysis of Deep Drawing Processes Based on Neural Networks Using Sobol Indices," IOP Conf. Ser.: Mater. Sci. Eng., 1157(1), p. 012089. 10.1088/1757-899X/1157/1/012089
6. Morris, M. D., 1991, "Factorial Sampling Plans for Preliminary Computational Experiments," Technometrics, 33(2), pp. 161–174. 10.1080/00401706.1991.10484804
7. Sobol, I. M., 1993, "Sensitivity Analysis for Nonlinear Mathematical Models," Math. Model. Comput. Exp., 1, pp. 407–414.
8. Padmanabhan, R., Oliveira, M., Alves, J., and Menezes, L., 2007, "Influence of Process Parameters on the Deep Drawing of Stainless Steel," Finite Elem. Anal. Des., 43(14), pp. 1062–1067. 10.1016/j.finel.2007.06.011
9. Sattari, H., Sedaghati, R., and Ganesan, R., 2007, "Analysis and Design Optimization of Deep Drawing Process," J. Mater. Process. Technol., 184(1–3), pp. 84–92. 10.1016/j.jmatprotec.2006.11.008
10. Ciuffo, B., Casas, J., Montanino, M., Perarnau, J., and Punzo, V., 2013, "Gaussian Process Metamodels for Sensitivity Analysis of Traffic Simulation Models," Transp. Res. Rec., 2390(1), pp. 87–98. 10.3141/2390-10
11. Ge, Q., Ciuffo, B., and Menendez, M., 2015, "Combining Screening and Metamodel-Based Methods: An Efficient Sequential Approach for the Sensitivity Analysis of Model Outputs," Reliab. Eng. Syst. Saf., 134, pp. 334–344. 10.1016/j.ress.2014.08.009
12. Fruth, J., 2015, "Sensitivity Analysis and Graph-Based Methods for Black-Box Functions With an Application to Sheet Metal Forming," Ph.D. thesis, TU Dortmund University and Mines Saint-Etienne. https://theses.hal.science/tel-01151170v1/document
13. Lehrer, T., Kaps, A., Daiminger, T., Lepenies, I., Wagner, M., and Duddeck, F., 2022, "Analysis of Global Sensitivities for One-Step and Multi-Step Deep-Drawing Simulations," 16th LS-DYNA Forum, Bamberg, Germany, Oct. 11–13, pp. 117–118. https://www.dynamore.de/de/fortbildung/konferenzen/vergangene/16-ls-dyna-forum-2022/2022-proceedings.pdf
14. Lehrer, T., Kaps, A., Lepenies, I., Wagner, M., and Duddeck, F., 2023, "2S-ML: A Simulation-Based Classification and Regression Approach for Drawability Assessment in Deep Drawing," Int. J. Mater. Form., 16(5), p. 56. 10.1007/s12289-023-01770-3
15. Lataniotis, C., Marelli, S., and Sudret, B., 2021, "Uncertainty Quantification in the Cloud With UQCloud," Proceedings of the 4th International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2021), Athens, Greece, June 28–30, pp. 209–217. 10.7712/120221.8033.18990
16. van Stein, B., Raponi, E., Sadeghi, Z., Bouman, N., van Ham, R. C. H. J., and Bäck, T., 2022, "A Comparison of Global Sensitivity Analysis Methods for Explainable AI With an Application in Genomic Prediction," IEEE Access, 10, pp. 103364–103381. 10.1109/ACCESS.2022.3210175
17. Forrester, A., Sobester, A., and Keane, A., 2008, Engineering Design Via Surrogate Modelling, Wiley, Hoboken, NJ.
18. Garud, S. S., Karimi, I. A., and Kraft, M., 2017, "Design of Computer Experiments: A Review," Comput. Chem. Eng., 106, pp. 71–95. 10.1016/j.compchemeng.2017.05.010
19. Niederreiter, H., 1992, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia, PA.
20. Saltelli, A., 2002, "Making Best Use of Model Evaluations to Compute Sensitivity Indices," Comput. Phys. Commun., 145(2), pp. 280–297. 10.1016/S0010-4655(02)00280-1
21. McKay, M. D., Beckman, R. J., and Conover, W. J., 1979, "Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output From a Computer Code," Technometrics, 21(2), pp. 239–245.
22. Rajabi, M. M., Ataie-Ashtiani, B., and Janssen, H., 2015, "Efficiency Enhancement of Optimized Latin Hypercube Sampling Strategies: Application to Monte Carlo Uncertainty Analysis and Meta-Modeling," Adv. Water Resour., 76, pp. 127–139. 10.1016/j.advwatres.2014.12.008
23. Morris, M. D., and Mitchell, T. J., 1995, "Exploratory Designs for Computational Experiments," J. Stat. Plann. Inference, 43(3), pp. 381–402. 10.1016/0378-3758(94)00035-T
24. Ye, K. Q., Li, W., and Sudjianto, A., 2000, "Algorithmic Construction of Optimal Symmetric Latin Hypercube Designs," J. Stat. Plann. Inference, 90(1), pp. 145–159. 10.1016/S0378-3758(00)00105-1
25. Jin, R., Chen, W., and Sudjianto, A., 2005, "An Efficient Algorithm for Constructing Optimal Design of Computer Experiments," J. Stat. Plann. Inference, 134(1), pp. 268–287. 10.1016/j.jspi.2004.02.014
26. Herman, J., and Usher, W., 2017, "SALib: An Open-Source Python Library for Sensitivity Analysis," J. Open Source Software, 2(9), p. 97. 10.21105/joss.00097
27. Banabic, D., 2010, Sheet Metal Forming Processes: Constitutive Modelling and Numerical Simulation, Springer, Berlin and Heidelberg.
28. Hill, R., 1948, "A Theory of the Yielding and Plastic Flow of Anisotropic Metals," Proc. R. Soc. London, Ser. A, 193(1033), pp. 281–297. 10.1098/rspa.1948.0045
29. Hockett, J. E., and Sherby, O. D., 1975, "Large Strain Deformation of Polycrystalline Metals at Low Homologous Temperatures," J. Mech. Phys. Solids, 23(2), pp. 87–98. 10.1016/0022-5096(75)90018-6
30. Courant, R., Friedrichs, K., and Lewy, H., 1967, "On the Partial Difference Equations of Mathematical Physics," IBM J. Res. Dev., 11(2), pp. 215–234. 10.1147/rd.112.0215
31. Wagner, M., 2019, Lineare und nichtlineare FEM, Springer Fachmedien Wiesbaden, Wiesbaden.
32. Neukamm, F., Feucht, M., Haufe, A., and Roll, K., 2008, "On Closing the Constitutive Gap Between Forming and Crash Simulation," 10th International LS-DYNA Conference, Dearborn, MI, June 8–10. https://www.researchgate.net/publication/312039751_On_closing_the_Constitutive_Gap_between_Forming_and_Crash_Simulation
33. Guo, Y., Batoz, J., Naceur, H., Bouabdallah, S., Mercier, F., and Barlet, O., 2000, "Recent Developments on the Analysis and Optimum Design of Sheet Metal Forming Parts Using a Simplified Inverse Approach," Comput. Struct., 78(1–3), pp. 133–148. 10.1016/S0045-7949(00)00095-X
34. Naceur, H., Guo, Y., Batoz, J., and Knopf-Lenoir, C., 2001, "Optimization of Drawbead Restraining Forces and Drawbead Design in Sheet Metal Forming Process," Int. J. Mech. Sci., 43(10), pp. 2407–2434. 10.1016/S0020-7403(01)00014-5
35. Sun, G., Li, G., Zhou, S., Xu, W., Yang, X., and Li, Q., 2011, "Multi-Fidelity Optimization for Sheet Metal Forming Process," Struct. Multidiscip. Optim., 44(1), pp. 111–124. 10.1007/s00158-010-0596-5
36. Rasmussen, C. E., and Williams, C. K. I., 2005, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA.
37. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., et al., 2011, "Scikit-Learn: Machine Learning in Python," J. Mach. Learn. Res., 12, pp. 2825–2830. 10.48550/arXiv.1201.0490
38. Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C., 1995, "A Limited Memory Algorithm for Bound Constrained Optimization," SIAM J. Sci. Comput., 16(5), pp. 1190–1208. 10.1137/0916069
39. Toal, D. J. J., Bressloff, N. W., and Keane, A. J., 2008, "Kriging Hyperparameter Tuning Strategies," AIAA J., 46(5), pp. 1240–1252. 10.2514/1.34822
40. Krige, D. G., 1951, "A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand," J. South Afr. Inst. Min. Metall., 52(6), pp. 119–139. https://hdl.handle.net/10520/AJA0038223X_4792
41. Matheron, G., 1963, "Principles of Geostatistics," Econ. Geol., 58(8), pp. 1246–1266. 10.2113/gsecongeo.58.8.1246
42. Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., 1989, "Design and Analysis of Computer Experiments," Stat. Sci., 4(4), pp. 409–423. 10.1214/ss/1177012413
43. Wolpert, D. H., 1992, "Stacked Generalization," Neural Networks, 5(2), pp. 241–259. 10.1016/S0893-6080(05)80023-1
44. Breiman, L., 1996, "Bagging Predictors," Mach. Learn., 24(2), pp. 123–140. 10.1007/BF00058655
45. Wolpert, D. H., and Macready, W. G., 1996, "Combining Stacking With Bagging to Improve a Learning Algorithm," Semantic Scholar, Seattle, WA, accessed Apr. 20, 2023, https://api.semanticscholar.org/CorpusID:10422484
46. Zhou, Z.-H., 2012, Ensemble Methods: Foundations and Algorithms, Chapman & Hall/CRC Machine Learning & Pattern Recognition, CRC Press, Hoboken, NJ.
47. Mendes-Moreira, J., Soares, C., Jorge, A. M., and de Sousa, J. F., 2012, "Ensemble Approaches for Regression," ACM Comput. Surv., 45(1), pp. 1–40. 10.1145/2379776.2379786
48. Breiman, L., 1996, "Stacked Regressions," Mach. Learn., 24(1), pp. 49–64. 10.1007/BF00117832
49. van Stein, B., and Raponi, E., 2022, "GSAreport: Easy to Use Global Sensitivity Reporting," J. Open Source Software, 7(78), p. 4721. 10.21105/joss.04721
50. Marelli, S., and Sudret, B., 2014, "UQLab: A Framework for Uncertainty Quantification in Matlab," Vulnerability, Uncertainty, and Risk, M. Beer, S.-K. Au, and J. W. Hall, eds., American Society of Civil Engineers, Reston, VA, pp. 2554–2563.