## Abstract

Reliability analysis is a core element in engineering design and can be performed with physical models (limit-state functions). It becomes computationally expensive when the dimensionality of the input random variables is high. This work develops a high-dimensional reliability analysis method through a new dimension reduction strategy so that the contributions of unimportant input variables are still accommodated after dimension reduction. Dimension reduction is performed with the first iteration of the first-order reliability method (FORM), which identifies important and unimportant input variables. A higher-order reliability analysis is then performed in the reduced space of only the important input variables. The reliability obtained in the reduced space is next integrated with the contributions of the unimportant input variables, resulting in a final reliability prediction that accounts for both types of input variables. Consequently, the new reliability method is more accurate than the traditional method, which fixes unimportant input variables at their means. The accuracy is demonstrated by three examples.

## 1 Introduction

In physics-based reliability analysis, a limit-state function $g(X)$ maps the vector of random input variables $X$ to a response *Y* that indicates the occurrence of a failure; a failure occurs when $g(X) < 0$.

Physics-based reliability methods can be divided into three categories: numerical methods [1–5], surrogate methods [6–11], and simulation methods [12–15]. Typically, numerical methods simplify the limit-state function using a first- or second-order Taylor expansion, and the reliability is approximated by the simplified function. Surrogate methods construct an easy-to-evaluate model utilizing sensitivity analysis, design of experiments (DOE), active learning, etc., and the reliability is obtained by calling the surrogate model instead of the original limit-state function. Both numerical and surrogate methods, however, suffer from the curse of dimensionality, which makes reliability analysis computationally expensive for high-dimensional problems, because reliability prediction repeatedly calls limit-state functions that are typically complex, resource-intensive numerical models. The number of function calls (FC) grows drastically with the dimensionality of the input variables. Although the efficiency of simulation methods, such as Monte Carlo simulation (MCS) [16] and importance sampling (IS) [17], is not affected by the dimensionality, they are still computationally expensive when the reliability is high and may not be practical in engineering design.

High-dimensional reliability analysis is encountered in many engineering and science fields [18–23]. Current high-dimensional reliability analysis methods are roughly classified into three types. The first type [24–27] uses high-dimensional model representation (HDMR) to decompose a high-dimensional limit-state function $g(X)$ into the sum of several lower-dimensional functions. The moments (mean, variance, etc.) of the response can then be approximated by several low-dimensional numerical integrations. However, the reliability obtained by HDMR may not be accurate enough if the interaction terms are dominant. The low-dimensional functions are usually approximated by Taylor expansion, which can also introduce errors. Although the accuracy of the reliability assessment can be improved by increasing the approximation order, the number of function evaluations may increase drastically. Several recent studies [28–30] combine adaptive metamodeling approaches (Polynomial Chaos Expansion (PCE), Kriging) with statistical model selection methods; their goal is to find the optimal integration points or training points for metamodeling. The balance between prediction accuracy and efficiency remains a challenge.

The second type of method [31–34] combines dimension reduction with surrogate modeling and machine learning. Three steps are usually involved. Step 1 is dimension reduction, performed by sliced inverse regression [34,35] or other methods [24,33] at specific training points, usually generated through DOE [36]; important input variables are identified. In Step 2, a surrogate model is constructed with respect to the important input variables in the reduced-dimensional space. Many regression and machine learning methods can be used for this purpose, including PCE [37], Gaussian process regression [38], support vector machines [39], and neural networks [32]. Step 3 is surrogate model validation: after the accuracy of the surrogate model is validated by MCS, it is used to estimate the reliability. Sufficient training points are needed to ensure good accuracy of the surrogate model, and the number of training points, and thereby the number of function calls, increases greatly with the dimensionality of the input variables.

The third commonly used method is principal component analysis (PCA) [40,41]. PCA reduces the dimension of the input variables by exploiting the correlations between them; it therefore works well when the input variables are strongly correlated. When the input variables are independent or only weakly correlated, PCA may not work well for dimension reduction. Besides, PCA does not use the information of the response *Y*; it is therefore an unsupervised dimension reduction technique. Although the dimension reduction is optimal in the given data space, it may be suboptimal for the entire regression space.

Overall, despite the progress, numerous challenges remain on the path toward routinely accommodating high-dimensional problems in reliability analysis. In most of the successful applications, only dozens of random input variables can be practically handled, except in special cases involving functional data [31,37]. However, the number of input variables could easily reach hundreds or thousands in system design. For example, aircraft wing design optimization [42] involves structural mechanics and aerodynamics, and the numbers of design variables, random variables, and constraints could be in the hundreds or thousands. Moreover, when the reliability requirement is high, accurately predicting the reliability is extremely computationally demanding.

In real engineering applications, not all the elements of $X$ contribute significantly to the response *Y*. The majority of the elements of $X$ may have insignificant individual effects and are therefore unimportant variables. Their total effect, however, may not be negligible because the unimportant variables may account for most of the elements of $X$. Traditional dimension reduction methods usually neglect the contribution of the unimportant variables by fixing them at their means, which can lead to an error.

In this study, we account for the total effect of the unimportant variables by fixing them at their percentile values, so that the dimension is reduced but the influence of the unimportant variables is not neglected. The proposed method does not require random sampling for dimension reduction; instead, it is based on a numerical method, specifically the first-order reliability method (FORM). After dimension reduction, any reliability method with higher accuracy can be used to predict the reliability since the computational effort is reduced significantly in the reduced space. The predicted reliability is then integrated with the contribution of the unimportant variables to produce the final reliability prediction.

## 2 Review

In this section, we briefly review the basic knowledge related to the proposed method, including FORM, the second-order reliability method (SORM), and the second-order saddlepoint approximation (SOSPA). The notation rules in this paper are: (1) a capitalized letter in bold denotes a vector of random variables (e.g., $X$ or $U$); (2) a lowercase letter in bold denotes a vector of deterministic variables (e.g., **x** or **u**); (3) an italicized capital letter denotes a random variable (e.g., *X* or *U*); and (4) an italicized lowercase letter denotes a deterministic variable (e.g., *x* or *u*).

### 2.1 FORM and SORM.

The probability of failure *p*_{f} is given by

(3) $p_f = \Pr\{g(X)<0\} = \int_{g(x)<0} f_X(x)\,dx$

where *f*_{X}(·) is the joint probability density function (PDF) of $X$. The limit-state function $g(X)$ is usually nonlinear. In this study, we assume all the elements in $X$ are independent. Directly integrating the PDF over the failure region $(g(X)<0)$ is often impractical and computationally expensive. This is the reason that many approximation methods have been developed, including FORM [1] and SORM [3], in which three steps are involved.

- Transform $X$ into standard normal variables $U$ by

  (4) $F_{X_i}(X_i) = \Phi(U_i)$

  where $F_{X_i}(\cdot)$ and $\Phi(\cdot)$ represent the cumulative distribution functions (CDFs) of *X*_{i} and *U*_{i}, respectively. Denote the transformation by $X = T(U)$; then Eq. (3) is rewritten as

  (5) $\Pr\{g(X)<0\} = \int_{g(T(u))<0} f_U(u)\,du$

  where *f*_{U}(·) is the joint PDF of $U$.
- Find the most probable point (MPP), the point with the highest PDF on the limit-state surface $g(U)=0$. Geometrically, the MPP has the shortest distance from the surface to the origin in U-space, so the MPP $u^*$ is found by

  (6) $\min_{u} \|u\| \quad \text{subject to} \quad g(u)=0$

  where $\|\cdot\|$ stands for the length of a vector. $\beta = \|u^*\|$ is the reliability index because it is related to the probability of failure, as will be shown in Eq. (9).
- Approximate the limit-state function linearly (FORM) or quadratically (SORM) at $u^*$; expanding at $u^*$ minimizes the approximation error. The two approximations are given by

  (7) $g(U) \approx g(u^*) + \nabla g(u^*)^T (U-u^*)$

  (8) $g(U) \approx g(u^*) + \nabla g(u^*)^T (U-u^*) + \frac{1}{2}(U-u^*)^T H(u^*)(U-u^*)$

  where $\nabla g(u^*)$ and $H(u^*)$ are the gradient and the Hessian matrix of $g(T(U))$ at $u^*$, respectively. After the three steps, the probability of failure calculated by FORM is given by

  (9) $p_f = \Phi(-\beta)$
As mentioned previously, *β* is called the reliability index. When FORM is used, *β* is also the magnitude of the MPP, as indicated in Eq. (6). Therefore, we call *β* from FORM the FORM-reliability index throughout the paper. The solution from SORM is in general more accurate and is obtained by multiplying Eq. (9) by a correction term [3].
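The MPP search in Eq. (6) is commonly solved with the HL-RF iteration. Below is a minimal Python sketch under the assumption that the limit-state function and its gradient are available in U-space; `hlrf_mpp_search` and the toy limit state are illustrative names, not the paper's examples.

```python
import numpy as np

def hlrf_mpp_search(g, grad_g, n, tol=1e-6, max_iter=100):
    """Search for the most probable point (MPP) of g in standard
    normal U-space with the classic HL-RF iteration (Eq. (6))."""
    u = np.zeros(n)                      # start from the origin (the mean)
    for _ in range(max_iter):
        gu = g(u)
        grad = grad_g(u)
        # HL-RF update: project onto the linearized limit state g = 0
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)             # FORM-reliability index
    return u, beta

# Linear toy limit state g(U) = 3 - (U1 + U2)/sqrt(2): exact beta = 3
g = lambda u: 3.0 - (u[0] + u[1]) / np.sqrt(2.0)
grad_g = lambda u: np.array([-1.0, -1.0]) / np.sqrt(2.0)
u_star, beta = hlrf_mpp_search(g, grad_g, 2)
```

For this linear limit state the iteration converges immediately and recovers the exact reliability index *β* = 3, so *p*_{f} = Φ(−3) by Eq. (9).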

### 2.2 SOSPA.

SOSPA also approximates the limit-state function quadratically at the MPP, as in Eq. (8), but computes the probability of failure of the quadratic response with the saddlepoint approximation. The key is the cumulant generating function (CGF) *K*_{Y}(*t*), which can be derived analytically from the approximated response in Eq. (8). Once *K*_{Y}(*t*) is available, the saddlepoint *t*_{s} is obtained by solving

(10) $K_Y'(t) = 0$

Then the probability of failure *p*_{f} is computed by [46]

(11) $p_f = \Phi(w) + \phi(w)\left(\dfrac{1}{w} - \dfrac{1}{v}\right)$

where *ϕ*(·) represents the PDF of the standard normal distribution, $w = \mathrm{sgn}(t_s)\sqrt{-2K_Y(t_s)}$, and $v = t_s\sqrt{K_Y''(t_s)}$; $\mathrm{sgn}(t_s)$ equals 1, −1, or 0 depending on whether *t*_{s} is positive, negative, or zero, and $K_Y''(t_s)$ is the second-order derivative of the CGF with respect to *t*, evaluated at the saddlepoint.
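The two-step saddlepoint recipe — solve for the saddlepoint, then evaluate the Lugannani-Rice tail formula — can be sketched with a CGF that has a closed form. The example below is a stand-in using an Erlang response rather than the quadratic response of Eq. (8); the Newton solver and the function names are illustrative.

```python
import math

def lugannani_rice(K, K1, K2, y, t0=0.1):
    """Saddlepoint (Lugannani-Rice) approximation of P(Y <= y) from a
    cumulant generating function K and its first two derivatives."""
    # Solve K'(t_s) = y by Newton's method (y = 0 gives Eq. (10))
    t = t0
    for _ in range(100):
        t -= (K1(t) - y) / K2(t)
    # Note: the formula is singular when t_s is near zero (y near the
    # mean); a limiting expression is used there in practice.
    w = math.copysign(math.sqrt(2.0 * (t * y - K(t))), t)
    v = t * math.sqrt(K2(t))
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return Phi(w) + phi(w) * (1.0 / w - 1.0 / v)

# Y ~ Erlang(shape 3, rate 1): K(t) = -3 ln(1 - t), valid for t < 1
K  = lambda t: -3.0 * math.log(1.0 - t)
K1 = lambda t: 3.0 / (1.0 - t)
K2 = lambda t: 3.0 / (1.0 - t) ** 2
p = lugannani_rice(K, K1, K2, y=2.0)
exact = 1.0 - math.exp(-2.0) * (1.0 + 2.0 + 2.0)   # Erlang CDF at y = 2
```

For this Erlang case the approximation agrees with the exact CDF value to about 10⁻⁴, which illustrates why a saddlepoint-based method can be much more accurate than a normal approximation of the tail.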

## 3 Methodology

The distinctive strategy of the proposed method is to use an accurate reliability method in the reduced space while accounting for the contributions of both important and unimportant input variables to the reliability.

### 3.1 Overview.

The purpose of dimension reduction is to identify important and unimportant variables in $X$. We use FORM to perform dimension reduction since the MPP from FORM directly measures the importance of the input variables, for two reasons. First, the reliability is determined by the FORM-reliability index, or the magnitude of the MPP, since $\beta = \|u^*\| = \sqrt{\sum_{i=1}^{n}(u_i^*)^2}$; second, the components of the MPP $u^* = (u_i^*)_{i=1,\ldots,n}$ determine the importance of the elements of $X$, or their contributions to the reliability. As shown in Fig. 1, a farther distance from the mean (or median) means a larger MPP component and therefore a higher contribution. Hence, we can use the MPP components to identify both important and unimportant input variables. Since the MPP components of the unimportant input variables do not change significantly during the MPP search, we propose to use the MPP obtained from the first iteration of the MPP search. This greatly reduces the computational effort.

Once the MPP is obtained from the first iteration, important and unimportant input variables are identified by their MPP components. The subsequent analysis is then conducted with only the important variables: a reliability method with higher accuracy is used while the unimportant input variables are fixed at their MPP components. Using a highly accurate reliability method is affordable because the number of function calls is reduced in the reduced space. The final reliability is then obtained by integrating the reliability obtained in the reduced space with the FORM-reliability index of the unimportant input variables.

The proposed method involves three steps: (1) dimension reduction, (2) reliability analysis in the reduced space, and (3) reliability analysis in the original space.

### 3.2 Dimension Reduction.

Performing only the first iteration of the MPP search yields the one-step MPP $u_1$ with reliability index $\beta_1 = \|u_1\|$, where *u*_{1i} is the *i*th component of $u_1$. The magnitudes of the components of $u_1$, therefore, indicate their importance to the probability of failure. More specifically, we examine the sensitivity of *p*_{f} with respect to the components of $u_1$. The sensitivity is defined by

(18) $\dfrac{\partial p_f}{\partial u_{1i}} = -\phi(-\beta_1)\dfrac{u_{1i}}{\beta_1}$

Since *φ*(−*β*_{1}) is a constant in Eq. (18), *u*_{1i}/*β*_{1} indicates the relative importance of each component. We can therefore use the following indicator to identify unimportant input random variables:

$c_i = \dfrac{|u_{1i}|}{\beta_1}$

If *c*_{i} is less than a threshold *c*_{thres}, *X*_{i} is considered unimportant. The higher the threshold, the more input random variables are classified as unimportant, and the more the dimension is reduced. Using different thresholds, a user can see how many important variables will be included in the subsequent accurate reliability analysis and can then determine an appropriate threshold given his or her computational budget. Based on our experience with the test problems, we recommend starting from $c_{thres} = 3\%$ or $5\%$ when searching for a suitable threshold.
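The partition driven by this indicator can be sketched as follows; the one-step MPP below is hypothetical (five large components and 95 small ones), and `partition_by_mpp` is an illustrative name.

```python
import numpy as np

def partition_by_mpp(u1, c_thres=0.03):
    """Split input variables into important / unimportant sets using
    the one-step MPP u1 and the indicator c_i = |u_1i| / beta_1."""
    beta1 = np.linalg.norm(u1)
    c = np.abs(u1) / beta1
    important = np.flatnonzero(c >= c_thres)
    unimportant = np.flatnonzero(c < c_thres)
    # FORM-reliability index carried by the unimportant portion
    beta_under = np.linalg.norm(u1[unimportant])
    return important, unimportant, beta_under

# Hypothetical one-step MPP: 5 large components and 95 small ones
u1 = np.concatenate([np.full(5, 1.0), np.full(95, 0.03)])
imp, unimp, beta_u = partition_by_mpp(u1)
```

Here five variables are kept and the other 95 contribute only through their collective index $\underline{\beta} \approx 0.29$, mirroring the 100-to-5 reduction of Example 1.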

Suppose the numbers of important and unimportant variables are $\bar{n}$ and $\underline{n} = n - \bar{n}$, respectively. Then the input variables are partitioned into two parts, $U = (\bar{U}; \underline{U})$, and the one-step MPP is partitioned accordingly as $u_1 = (\bar{u}_1; \underline{u}_1)$. Let $\bar{\beta}_1 = \|\bar{u}_1\|$ and $\underline{\beta}_1 = \|\underline{u}_1\|$ be the FORM-reliability indexes of the important and unimportant portions of $u_1$, respectively. The unimportant variables $\underline{U}$ are fixed at $\underline{u}_1$, and their contribution is accounted for through $\underline{\beta}_1$ in the final stage of the reliability analysis. The limit-state function then becomes a function of $\bar{U}$ with reduced dimension. The new function is given by

$G(\bar{U}) = g(T(\bar{U}; \underline{u}_1))$

For brevity, we denote the limit-state function as $G(U\xaf)$.

### 3.3 Reliability Analysis in the Reduced Space.

We next perform reliability analysis in the reduced dimensional space ($U\xaf$ space). Once the dimension is reduced, the reliability can be solved either by numerical methods (FORM, SORM, SOSPA, etc.) or surrogate methods (Kriging, PCE, Machine Learning, etc.).

If SOSPA is used, the CGF *K*_{G}(*t*) of $G(\bar{U})$ is derived analytically by Eq. (28); the detailed derivations can be found in Ref. [43]. The saddlepoint *t*_{s} is obtained by solving $K_G'(t) = 0$. The probability of failure of $G(\bar{U})$ is calculated by Eq. (11), and its solution is denoted by $\bar{p}_f$. The reliability index from SOSPA is then given by

$\bar{\beta}_{G,SPA} = -\Phi^{-1}(\bar{p}_f)$

If all the derivatives are evaluated by the finite difference method, the number of function evaluations with respect to the dimension of $\bar{U}$ is $k(\bar{n}+1) + \frac{1}{2}\bar{n}(\bar{n}+1)$, where *k* is the number of iterations of the MPP search.
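This cost formula can be tabulated directly; `fc_reduced_space` is an illustrative name, and the formula assumes forward finite differences for the gradient plus one set of evaluations for the Hessian at the MPP.

```python
def fc_reduced_space(k, n_bar):
    """Function calls for k MPP-search iterations (forward differences)
    plus the Hessian at the MPP in the reduced n_bar-dim space."""
    return k * (n_bar + 1) + n_bar * (n_bar + 1) // 2

# e.g., 5 important variables and 5 MPP iterations
calls = fc_reduced_space(k=5, n_bar=5)   # 5*6 + 15 = 45
```

With $\bar{n} = 5$ and *k* = 5 this gives 45 calls in the reduced space; adding the 101 calls of the one-iteration FORM in the original 100 dimensions appears consistent with the 146 function calls reported for DR-SOSPA in Table 1.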

### 3.4 Final Reliability Analysis.

In Step 2, we perform the MPP search in the reduced space with the unimportant variables fixed at $\underline{u}_1$. This produces the MPP $\bar{u}_G^*$ and the FORM-reliability index $\bar{\beta}_G = \|\bar{u}_G^*\|$. Next, we prove that $\bar{u}_G^* = \bar{u}^*$ and therefore $\bar{\beta} = \bar{\beta}_G$; then we can use Eq. (31) to integrate the results of Steps 1 and 2.

Assume that the MPPs of $g(T(\bar{U}; \underline{U}))$ and $G(\bar{U})$ are unique; in other words, $u^* = (\bar{u}^*; \underline{u}^*)$ and $\bar{u}_G^*$ are unique.

We now replace the FORM-reliability index $\bar{\beta}_G$ with the more accurate reliability index $\bar{\beta}_{G,SPA}$ in Eq. (29), and then we obtain the final reliability index

(31) $\beta_{overall} = \sqrt{\bar{\beta}_{G,SPA}^2 + \underline{\beta}^2}$

### 3.5 Numerical Procedure.

The numerical procedure of the proposed high-dimensional reliability analysis method is summarized below.

1. Dimension reduction: Perform one-iteration FORM to obtain the one-step MPP $u_1$; identify the important and unimportant random variables by checking whether $|u_{1i}|/\beta_1 \le c_{thres}$; partition the input variables and the corresponding MPP as $U = (\bar{U}; \underline{U})$ and $u_1 = (\bar{u}_1; \underline{u}_1)$; calculate the FORM-reliability index $\underline{\beta} = \|\underline{u}_1\|$; by fixing the unimportant variables $\underline{U}$ at $\underline{u}_1$, obtain the new limit-state function $G(\bar{U}) = g(T(\bar{U}; \underline{u}_1))$ with reduced dimension.
2. Reliability analysis in $\bar{U}$ space: Use an accurate reliability method such as SOSPA to find the probability of failure $\bar{p}_f$ based on $G(\bar{U})$ and calculate the corresponding reliability index, which is $\bar{\beta}_{G,SPA}$ if SOSPA is used.
3. Final reliability analysis: Calculate the final reliability index by $\beta_{overall} = \sqrt{\bar{\beta}_{G,SPA}^2 + \underline{\beta}^2}$ and the final probability of failure by $p_{f,overall} = \Phi(-\beta_{overall})$.
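The final integration step reduces to a single arithmetic combination. A minimal sketch with hypothetical index values; `integrate_reliability` is an illustrative name.

```python
import math

def integrate_reliability(beta_g_spa, beta_under):
    """Combine the reduced-space reliability index with the
    FORM-reliability index of the unimportant variables."""
    beta_overall = math.hypot(beta_g_spa, beta_under)   # sqrt(a^2 + b^2)
    # Phi(-beta) via the complementary error function
    pf_overall = 0.5 * math.erfc(beta_overall / math.sqrt(2.0))
    return beta_overall, pf_overall

# Hypothetical indices: reduced-space index 3.0, a small
# contribution of 0.4 from the unimportant variables
beta_o, pf_o = integrate_reliability(3.0, 0.4)
```

Note that ignoring the unimportant variables entirely (setting the second argument to zero) always overestimates the reliability, which is exactly the error the final integration step removes.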

## 4 Examples

In this section, we use three examples to demonstrate the proposed method. Example 1 is a mathematical problem with all the input variables normally distributed. It is presented step by step to show all the details of the proposed method so that an interested reader can easily repeat the process and reproduce the result. Example 2 involves a cantilever beam with over 200 random variables, some of which follow non-normal distributions. Example 3 shows a truss system with 52 bars and 110 random variables, some of which follow extreme value distributions, and the limit-state function is a black-box function. For all the examples, we use the same threshold value $cthres=3%$ to divide the input variables into important and unimportant variables.

The proposed method is compared with FORM, SOSPA, MCS, and two HDMR-based methods. The HDMR-based methods decompose the limit-state function into *n* univariate functions and then create surrogate models for all the univariate functions with three and five points, respectively; the reliability is then calculated by SOSPA based on the surrogate models. The two HDMR methods are denoted by HDMR-3-SOSPA and HDMR-5-SOSPA. DR-SOSPA is the proposed method, which uses SOSPA in the reduced-dimensional space and accounts for the effects of the eliminated variables. To evaluate the advantage of accounting for the effects of the eliminated variables, we also compare DR-SOSPA with the method that uses SOSPA in the reduced-dimensional space but fixes the eliminated variables at their means; we denote the latter method DR-SOSPA-M. The result of MCS serves as a reference for accuracy comparison, and the relative error of a non-MCS method with respect to MCS is defined by

$\varepsilon = \dfrac{|p_f - p_{f,MCS}|}{p_{f,MCS}} \times 100\%$

where *p*_{f} and *p*_{f,MCS} are the probabilities of failure obtained by the non-MCS method and MCS, respectively. The number of function calls (FC) and the coefficient of efficiency (CoE) are used to measure the efficiency; the latter normalizes the number of function calls by the dimensionality *n* of the input variables.
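The tables below imply that CoE is the number of function calls per input dimension, FC/*n* (e.g., FC = 404 and *n* = 100 give CoE = 4.04). A minimal sketch of both metrics under that reading; the function names are illustrative.

```python
def rel_error(pf, pf_mcs):
    """Relative error of a non-MCS method with respect to MCS, in %."""
    return abs(pf - pf_mcs) / pf_mcs * 100.0

def coe(fc, n):
    """Coefficient of efficiency: function calls per input dimension
    (assumed reading, consistent with the tables in this section)."""
    return fc / n

# FORM row of Table 1 (Example 1, n = 100)
err = rel_error(3.9966e-3, 6.3416e-3)   # ≈ 36.98 %
eff = coe(404, 100)                      # 4.04
```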

### 4.1 A Mathematical Problem.

The mathematical problem is a parabolic limit-state function of 100 input variables, where *U*_{i}, *i* = 1, 2, …, 100, are all independent standard normal random variables, namely, $U_i \sim N(0, 1^2)$, and *k*_{i} is the coefficient of a linear term, with *k*_{i} = 0.08 for *i* = 6, 7, …, 100.

We first perform one iteration of the MPP search to obtain the one-step MPP $u_1$. Using the indicator $|u_{1i}|/\beta_1 > c_{thres}$ to identify important variables, we find five important variables, $\bar{U} = (U_1, U_2, U_3, U_4, U_5)^T$. The unimportant variables are $\underline{U} = (U_6, U_7, \ldots, U_{100})^T$, and $u_1$ is partitioned into $(\bar{u}_1; \underline{u}_1)$ accordingly. The reliability index of the unimportant variables is $\underline{\beta} = \|\underline{u}_1\| = 0.3419$; it represents the contribution of the unimportant variables to the reliability. We then fix $\underline{U}$ at $\underline{u}_1$ to obtain the reduced limit-state function $G(\bar{U})$.

Thus, the dimension is reduced to 5 from 100.

Next, we conduct reliability analysis in the $\bar{U}$ space. We first perform the MPP search for $G(\bar{U})$, which results in the MPP $\bar{u}_G^* = (1.1770, 1.1770, 1.1770, 1.1770, 1.1770)^T$. We then calculate the Hessian matrix of $G(\bar{U})$ at $\bar{u}_G^*$. Using SOSPA, we obtain the probability of failure $\bar{p}_f = 6.7352 \times 10^{-3}$ and the corresponding reliability index of the important variables $\bar{\beta}_{G,SPA} = 2.4711$. The total reliability index, which accommodates both important and unimportant variables, is $\beta_{overall} = \sqrt{\bar{\beta}_{G,SPA}^2 + \underline{\beta}^2} = 2.4946$. The final probability of failure is *p*_{f,overall} = Φ(−*β*_{overall}) = 6.3044 × 10^{−3}. The results of all the methods are summarized in Table 1.
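The integration step can be checked directly with the values quoted above:

```python
import math

# Final integration for Example 1, using the values from the text
beta_g_spa = 2.4711   # SOSPA reliability index of the important variables
beta_under = 0.3419   # FORM-reliability index of the unimportant variables
beta_overall = math.hypot(beta_g_spa, beta_under)            # ≈ 2.4946
pf_overall = 0.5 * math.erfc(beta_overall / math.sqrt(2.0))  # Phi(-beta)
```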

| Methods | p_{f} | Error (%) | FC | CoE |
|---|---|---|---|---|
| MCS | 6.3416 × 10^{−3} | – | 10^{7} | 10^{5} |
| FORM | 3.9966 × 10^{−3} | 36.98 | 404 | 4.04 |
| SOSPA | 6.3515 × 10^{−3} | 0.16 | 5555 | 55.55 |
| DR-SOSPA-M | 6.1501 × 10^{−3} | 3.02 | 146 | 1.46 |
| HDMR-3-SOSPA | 1.792 × 10^{−3} | 71.7 | 201 | 2.01 |
| HDMR-5-SOSPA | 1 | – | 401 | 4.01 |
| DR-SOSPA | 6.3044 × 10^{−3} | 0.59 | 146 | 1.46 |


As shown in Table 1, SOSPA, DR-SOSPA, and DR-SOSPA-M accurately predict the probability of failure. Compared with SOSPA, which requires 5555 function calls for an error of 0.16%, the proposed method needs only 146 function calls (CoE = 1.46) while increasing the error only to 0.59%. Although DR-SOSPA-M has the same efficiency as the proposed method, its accuracy is worse than that of DR-SOSPA because it ignores the joint influence of the unimportant variables. FORM does not produce an accurate result, and neither do the two HDMR methods for this example. To find the cause of the inaccuracy, we performed MCS directly on the surrogate models from HDMR instead of using SOSPA and obtained almost the same results as those of HDMR-SOSPA. This indicates that the surrogate models from HDMR are not accurate: their Hessian matrices are significantly different from those of the original limit-state function.

### 4.2 A Cantilever Beam.

A cantilever beam is shown in Fig. 2. It is subjected to 106 random forces on the top surface, of which six (*F*_{i}, *i* = 1, 2, …, 6) are lognormally distributed and the rest (*F*_{i}, *i* = 7, 8, …, 106) follow normal distributions. The locations of the forces, denoted by $l_{F_i}, i = 1, 2, \ldots, 106$, are normally distributed random variables. The width *w*, height *h*, and yield strength *S*_{y} are normally distributed. All the random variables are independent. The distributions are shown in Table 2.

| Random variables | Distribution | Mean | Standard deviation |
|---|---|---|---|
| S_{y} (MPa) | Normal | 720 | 60 |
| w (m) | Normal | 0.2 | 0.001 |
| h (m) | Normal | 0.4 | 0.001 |
| F_{i}, i = 1, 2, …, 6 (kN) | Lognormal | 30 + 5i | 2.4 + 0.4i |
| $l_{F_i}$, i = 1, 2, …, 6 (m) | Normal | 4.3 + 0.1i | 0.01 |
| F_{i}, i = 7, 8, …, 106 (kN) | Normal | 10 | 1 |
| $l_{F_i}$, i = 7, 8, …, 106 (m) | Normal | 0.02i | 0.01 |


We first perform the one-iteration FORM to obtain the one-step MPP $u_1$. Using $c_{thres} = 3\%$, we obtain nine important variables, $\bar{U} = (S_y, w, h, F_1, F_2, \ldots, F_6)^T$, and the reliability index of the unimportant variables $\underline{\beta} = 0.1666$. Performing reliability analysis in the $\bar{U}$ space using SOSPA gives $\bar{p}_f = 1.9481 \times 10^{-6}$ with the corresponding reliability index $\bar{\beta}_{G,SPA} = 4.6168$. The total reliability index, which accommodates both important and unimportant variables, is $\beta_{overall} = \sqrt{\bar{\beta}_{G,SPA}^2 + \underline{\beta}^2} = 4.6199$. The probability of failure for the original limit-state function is *p*_{f,overall} = Φ(−*β*_{overall}) = 1.9201 × 10^{−6}. The results are summarized in Table 3.

| Methods | p_{f} | Error (%) | FC | CoE |
|---|---|---|---|---|
| MCS | 1.9106 × 10^{−6} | – | 1.6 × 10^{9} | 7.4 × 10^{6} |
| FORM | 1.7964 × 10^{−6} | 6.0 | 648 | 3.0 |
| SOSPA | 1.9200 × 10^{−6} | 0.5 | 24,084 | 112.0 |
| DR-SOSPA-M | 1.8926 × 10^{−6} | 1.0 | 301 | 1.4 |
| HDMR-3-SOSPA | 1.8158 × 10^{−6} | 5.0 | 431 | 2.0 |
| HDMR-5-SOSPA | 3.4526 × 10^{−6} | 80.7 | 861 | 4.0 |
| DR-SOSPA | 1.9201 × 10^{−6} | 0.5 | 301 | 1.4 |


As the results indicate, FORM is the least accurate although it is efficient. SOSPA has an error of 0.5%, but its efficiency is the worst with 24,084 function calls and CoE = 112. DR-SOSPA outperforms other methods with the same accuracy (0.5%) as SOSPA and the highest efficiency (FC = 301 and CoE = 1.4).

### 4.3 A Truss System.

This example is modified from Ref. [48]. The dome truss system consists of 52 bars with 21 nodes, as shown in Fig. 3. The truss structure is similar to the roof of a stadium. To distinguish nodes from bars in the figure, numbers with a dot denote nodes and numbers without a dot denote bars. All the nodes lie on an imaginary hemisphere with a radius of 240 in. The Young's moduli and the cross-sectional areas of the bars follow normal distributions. The structure is subjected to six random forces at nodes 1–13, where *F*_{1} is applied to node 1, *F*_{2} is applied to nodes 2 and 4, *F*_{3} is applied to nodes 3 and 5, *F*_{4} is applied to nodes 6 and 10, *F*_{5} is applied to nodes 8 and 12, and *F*_{6} is applied to nodes 7, 9, 11, and 13. The directions of all the forces point to the center of the imaginary hemisphere. All the random variables are independent, and their distributions are shown in Table 4.

| Random variables | Distribution | Mean | Standard deviation |
|---|---|---|---|
| E_{i}, i = 1∼52 (ksi) | Normal | 2.5 × 10^{4} | 1000 |
| A_{i}, i = 1∼8 and 29∼36 (in²) | Normal | 2 | 0.001 |
| A_{i}, i = 9∼16 (in²) | Normal | 1.2 | 0.0006 |
| A_{i}, i = 17∼28 and 37∼52 (in²) | Normal | 0.6 | 0.0003 |
| F_{1} (kip) | Normal | 45 | 3.6 |
| F_{2} (kip) | Extreme value | 40 | 6.0 |
| F_{3} (kip) | Extreme value | 35 | 5.25 |
| F_{4} (kip) | Normal | 30 | 4.5 |
| F_{5} (kip) | Normal | 25 | 3.75 |
| F_{6} (kip) | Normal | 20 | 3 |


A failure occurs when the displacement of node 1 exceeds the threshold displacement *δ*_{0} = 0.7 in., which defines the limit-state function in Eq. (50). $E = [E_1, E_2, \ldots, E_{52}]^T$ and $A = [A_1, A_2, \ldots, A_{52}]^T$ are the vectors of the Young's moduli and cross-sectional areas, respectively, and $F = [F_1, F_2, \ldots, F_6]^T$ is the vector of the loads.

Following the procedure in Sec. 3.5, we obtain the one-iteration MPP. With $c_{thres} = 3\%$, nine variables are identified as important, namely [*F*_{1}, …, *F*_{5}, *E*_{1}, …, *E*_{4}]^{T}. The probability of failure is then obtained by integrating the influence of the important and unimportant variables. The results are summarized in Table 5. FORM produces a large error. SOSPA produces the most accurate result, but its efficiency is poor as it needs 6660 function calls with CoE = 60.54. The error of DR-SOSPA is 2.29%, smaller than that of DR-SOSPA-M and larger than that of SOSPA, while its computational burden is relieved significantly with only 206 function calls and CoE = 1.87. The proposed DR-SOSPA is also better than HDMR-SOSPA in both accuracy and efficiency.

| Methods | p_{f} | Error (%) | FCs | CoE |
|---|---|---|---|---|
| MCS | 5.10 × 10^{−3} | – | 10^{7} | 9.09 × 10^{4} |
| FORM | 5.7678 × 10^{−3} | 13.09 | 444 | 4.03 |
| SOSPA | 5.0481 × 10^{−3} | 1.02 | 6660 | 60.54 |
| DR-SOSPA-M | 4.8532 × 10^{−3} | 4.84 | 179 | 1.63 |
| HDMR-3-SOSPA | 4.3053 × 10^{−3} | 15.6 | 221 | 2.01 |
| HDMR-5-SOSPA | 4.6776 × 10^{−3} | 8.3 | 441 | 4.01 |
| DR-SOSPA | 4.9833 × 10^{−3} | 2.29 | 206 | 1.87 |


We also modify this example to examine a case with a large probability of failure by reducing the threshold *δ*_{0} in Eq. (50) to 0.5 in. The importance threshold is still 3%, and nine variables are important. The results show that the proposed method is effective for problems with a large probability of failure as well (Table 6).

| Methods | p_{f} | Error (%) | FCs | CoE |
|---|---|---|---|---|
| MCS | 0.2781 | – | 10^{5} | 909 |
| FORM | 0.2978 | 7.10 | 333 | 3.03 |
| SOSPA | 0.2763 | 0.65 | 6549 | 59.54 |
| DR-SOSPA-M | 0.2756 | 0.90 | 196 | 1.78 |
| HDMR-3-SOSPA | 0.2669 | 4.02 | 221 | 2.01 |
| HDMR-5-SOSPA | 0.4730 | 70.1 | 441 | 4.01 |
| DR-SOSPA | 0.2758 | 0.84 | 196 | 1.78 |


The main computer code of the truss example can be found in Supplemental Material A on the ASME Digital Collection. Interested readers can test the proposed method or other methods based on the code using the truss example.

## 5 Conclusions

The proposed method partitions the input random variables into two parts, important and unimportant variables, which is achieved by using the information from the first iteration of FORM. With the unimportant random variables fixed at their percentile values obtained from one-iteration FORM, the dimension is reduced to the dimension of important input random variables. Then the probability of failure is found by an accurate reliability method in the reduced space. The final probability of failure is obtained by integrating the probability of failure in the reduced space and the contributions of unimportant variables. Hence, the dimension is reduced, and the contributions of all input variables are also accommodated, resulting in high accuracy and efficiency of high-dimensional reliability analysis.

The proposed method works better when fewer input variables are important. It cannot effectively reduce the dimension, however, when all input variables are important. If the dimension is not reduced, the proposed dimension reduction strategy does not affect the performance of the method used in the second step (the highly accurate reliability method in the reduced space in Sec. 3.5). In this case, one may use other dimension reduction methods that reduce the dimension through linear combinations of the original input variables. Another limitation is that the proposed method may not be accurate for highly nonlinear problems, since the one-iteration MPP may not accurately identify the real importance of the random variables. More iterations of the MPP search may help find the real importance of the variables, but the efficiency will deteriorate.

Our future work will improve the proposed method when most of the input variables are important. We will also study the possibility of applying the proposed method to reliability-based design optimization.

## Acknowledgment

We would like to acknowledge the support from the National Science Foundation under Grant No. 1923799.

## Conflict of Interest

The authors declare that they have no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.