
Probabilistic analysis of long-term loss incorporating maximum entropy method and analytical higher-order moments

Abstract

Quantifying the economic losses of civil infrastructure subjected to various hazards under a life-cycle context is of vital importance for risk assessment and management. In previous studies, the expected long-term loss has been widely applied as a standard decision criterion in life-cycle analysis. However, the expectation alone may not be informative enough to characterize the uncertainties associated with the long-term loss; the higher-order moments and the probability distribution should also be investigated. In this paper, a probabilistic analysis framework is proposed to construct the probability density function and cumulative distribution function of the long-term loss by assessing its analytical statistical moments. A stochastic renewal process is utilized to assess the long-term loss by considering the uncertainties associated with the stochastic occurrence and frequency of hazards. Based on the maximum entropy method, the proposed approach assesses the probability distribution of the long-term loss far more efficiently than crude Monte Carlo simulation. The resulting probability distribution provides essential information for the decision-making process of risk management. An illustrative example shows the probability density function of the long-term loss of civil infrastructure subjected to hurricane hazards. The good agreement between the results obtained by the proposed approach and Monte Carlo simulation verifies the accuracy and effectiveness of the proposed method.

Introduction

In recent decades, substantial financial and social losses caused by natural hazards have raised public awareness of risk management for civil infrastructure. Life-cycle performance and risk assessment of engineering structures have been key concerns of decision-makers seeking to mitigate the potential risk under hazards. Researchers have paid special attention to long-term loss estimation and risk management of civil infrastructure subjected to various natural hazards, such as earthquakes, hurricanes, and flooding [1,2,3]. In this context, the long-term loss refers to the cumulative financial cost incurred over the service life of the infrastructure due to hazard-induced damage; long-term loss assessment is also known as life-cycle risk assessment. Though the expected long-term loss has been widely utilized as a standard decision criterion, researchers have argued that the expectation is not fully informative and have moved beyond expected losses to improve decision-making. For instance, Goda and Hong [4] proposed an optimal seismic design framework considering different risk attitudes of decision-makers based on stochastic dominance criteria. The cumulative prospect theory was incorporated to aid decision-making by considering probability distributions associated with hazard risk [5, 6]. Furthermore, more studies have started to investigate the statistical parameters of the long-term loss, including the standard deviation and higher-order moments. For instance, Pandey and Van der Weide [7] proposed an analytical renewal model to assess the life-cycle cost considering the expectation and standard deviation. Li et al. [8] showed the necessity of assessing higher-order moments of the long-term loss considering stationary and non-stationary hazard models; their study indicated that the long-term loss under non-stationary hazards may have larger higher-order moments (i.e., skewness and kurtosis) than under the stationary model.
Uncertainties associated with the stochastic occurrence and intensity of hazards are commonly modeled by stochastic processes. Stationary hazard arrivals can be modeled by stochastic processes with a stationary occurrence rate, e.g., the homogeneous Poisson process. Non-stationary hazards can be described by models with time-varying or stochastic occurrence rates, e.g., the non-homogeneous Poisson process and the mixed Poisson process. In addition to statistical moments, the probability distribution of the long-term loss can also be significant information for the decision-making process. For instance, Pandey and Van der Weide [9] highlighted the importance of assessing the probability distribution of the seismic damage cost of engineering structures during the life-cycle, showing that the full probability distribution provides a more realistic estimate of the potential damage cost. Wang and Zhang [10] proposed a probability-based loss estimation of structures subjected to tropical cyclone damage. Though these approaches successfully assessed the probability distribution, they may not be able to directly incorporate the statistical moments of the long-term loss into the probabilistic assessment. Although Monte Carlo simulation (MCS) can be easily employed to estimate the distribution of the long-term loss, it can be time-consuming under different scenarios, since a large number of simulations are required for a single case. To address this limitation, this study delivers a probabilistic approach based on the maximum entropy method and statistical moments. The proposed approach provides substantial flexibility for the probabilistic assessment, as the statistical moments can be directly used to produce the probability density function (PDF) and cumulative distribution function (CDF) of the long-term loss.

The maximum entropy method belongs to the so-called moment methods. Moment methods [11,12,13] are usually employed to fit the underlying PDF of a random variable of interest by using a finite number of moments as constraints in reliability analysis. The two main concerns in fitting an unknown PDF are the estimation of statistical moments and the selection of an appropriate distribution model. For instance, the random variable of interest is usually the output of a complex system with various input variables, which makes statistical moment estimation intractable, especially for higher-order moments. To address this issue, dimension-reduction techniques are widely adopted to reduce the computational effort of the high-dimensional integration required for higher-order moments, i.e., the so-called univariate dimension-reduction method [12] and the bivariate dimension-reduction method [13]. Besides dimension-reduction techniques, cubature rules [14, 15] are also used for higher-order moment approximation. To further improve the efficiency and accuracy of moment estimation, adaptive dimension-reduction methods with delineation of cross terms [16] or sensitivity analysis [17, 18] have been proposed. In this paper, the long-term loss of civil infrastructure is of interest, and its statistical moments can be obtained analytically via the moment generating function (MGF) [8].

Once the statistical moments are obtained, the probability density function of a random variable can be recovered through a selected distribution model, so another key issue is selecting an appropriate model. In recent years, various parametric probability distribution models have been proposed. A guideline for comparing different probability distribution models is summarized in reference [19], in which the merits and drawbacks of various approaches are discussed. The Pearson system [20] can be employed to construct the PDF of a random variable through a family of different probability distribution types with the first four statistical moments as constraints. As described in [18], the Pearson system is adopted to construct the PDF of structural response due to its flexibility. Generally, the Pearson system can fit various types of PDFs; however, it may not produce stable results near the boundaries between different distribution types [21]. The Johnson system [22] contains three distribution types, i.e., lognormal, bounded, and unbounded distributions, with a mathematical transformation function that maps these distributions into standard normal distributions. The Johnson system shows low calculation performance when recovering unknown parameters from estimated statistical moments [23]. Moreover, neither the Pearson system nor the Johnson system may be available for an unbounded bimodal PDF, even though the Johnson system provides a bounded and restricted bimodal PDF. Saddlepoint approximations [24] can construct the PDF of a random variable by using a few statistical moments to determine the cumulant generating function, but they may encounter numerical instability, and the latent PDF usually follows an exponential form, leading to inaccurate estimates for non-exponential PDF types [19]. The generalized lambda distribution is a flexible tool to fit different types of PDFs, but its accuracy may not be guaranteed in some regions of the skewness-kurtosis plane [25].
Recently, the shifted generalized lognormal distribution with the first four statistical moments has been developed to fit a PDF within almost the entire range of skewness and kurtosis [26, 27], although it requires complicated computation [28]. Some other probability distribution models, e.g., the Hermite model [29] and the cubic normal transformation [28, 30], have also been adopted to fit unknown PDFs.

Among the various moment approaches, the maximum entropy method (MEM), which originates from modern information theory, is considered the most unbiased one. The main idea of MEM is to fit the underlying PDF by using the first n statistical moments of a system output as constraints. The maximum entropy principle was developed by Jaynes [31]; it selects the most probable PDF from a large number of candidates, since the process adds minimal spurious information. Shore and Johnson proved that the MEM satisfies all the conditions of consistency while using only statistical moments as constraints [32]. Hence, the MEM does not suffer from the disadvantages of the other probability distribution models mentioned above, and it contains a whole family of generalized exponential distributions, e.g., the normal, lognormal, gamma, beta, and exponential distributions. Furthermore, MEM is applicable to multi-modal distribution types [33]. In recent years, MEM has attracted great attention in structural reliability analysis due to its conceptual elegance. Ref. [33] developed a two-step strategy to evaluate structural reliability efficiently, in which a normalized-moment-based quadrature rule with a dimension-reduction technique is employed to estimate the first four statistical moments of the system output efficiently, and MEM is then applied to derive the PDF of the structural response. A hybrid dimension-reduction method incorporating an improved MEM with the GOpoly algorithm [34] and the EBE method [35] was developed to improve the efficiency of structural analysis [17]. An improved maximum entropy method based on a nonlinear mapping and a sparse grid numerical integration technique was proposed to achieve a good balance between accuracy and efficiency in structural reliability analysis [36]. Generally, the statistical moments used in MEM are integer moments, i.e., the orders of the moments are all integers.
For practical application to complex engineering problems, the conventional MEM with integer moments as constraints may encounter the difficulty that the accuracy of the integer moments cannot be guaranteed, especially for higher-order moments such as skewness and kurtosis. To mitigate this difficulty, the fractional-moment-based MEM, in which the orders of the moments take fractional values, was introduced in structural reliability analysis [37]. Moreover, an improved dimension-reduction technique named the multiplicative dimension-reduction method was developed for fractional moment estimation. Xu et al. [38, 39] then extended the fractional-moment-based MEM to structural dynamic systems by incorporating the equivalent extreme value distribution. To further improve the accuracy, an adaptive scaled unscented transformation with the maximum entropy method was developed for efficient structural reliability analysis [40]. Recently, a transformed mixed-degree cubature rule with fractional-moment-based maximum entropy was developed to reconstruct the unknown PDF of structural response [41]. Note that in structural reliability problems, the involved systems are quite complex and failure probability estimation focuses on the long tail of the PDF of the system output, which requires an extremely accurate estimate of the PDF and significantly impedes the reliability analysis. Fortunately, MEM can tackle this problem with a good trade-off between accuracy and efficiency. Hence, MEM is a powerful tool to derive the PDF of a system output. However, to the best of the authors' knowledge, the maximum entropy approach has not yet been incorporated into loss assessment to identify the probability distribution of the long-term loss. In this paper, a maximum entropy method incorporating higher-order analysis based on the stochastic renewal process is developed.
The probability distribution of long-term loss can be accurately constructed by MEM since the statistical moments can be analytically obtained via MGF [8].

In this paper, a maximum entropy method incorporating the first four analytical moments of the long-term loss, i.e., the mean, standard deviation, skewness, and kurtosis, is proposed to derive the underlying PDF of the long-term loss of civil infrastructure, which can significantly mitigate the computational effort of Monte Carlo simulation when analyzing various hazard scenarios. The organization of this paper is as follows. In the Higher-order analysis of long-term loss section, the stochastic model of hazards is briefly reviewed and the framework of higher-order analysis for the statistical moments of long-term loss is summarized. The maximum entropy method is presented in the Maximum entropy method for deriving the PDF of long-term loss section, and the application of the proposed framework to probabilistic long-term loss assessment is illustrated in the Illustrative example section to demonstrate the accuracy and efficiency of the proposed method.

Higher-order analysis of long-term loss

Stochastic model of hazards

During long-term loss assessment, it is of great importance to quantify the uncertainties associated with hazard frequency and intensity. Stochastic models, typically calibrated to historical data, are widely utilized to model hazard arrivals and uncertain magnitudes. For instance, on the basis of historical records, the homogeneous Poisson process (HPP) can be employed to model the occurrence of natural hazards, such as earthquakes [42, 43] and hurricanes [44, 45]. In recent studies, researchers have indicated that stationary stochastic models may not be able to capture the non-stationary and time-dependent characteristics of hazard frequency and magnitude under various scenarios. To this end, non-stationary stochastic models have been proposed to quantify such variability. For instance, the renewal process has been used to model the stochastic arrivals of strong earthquakes over a long-term period [46], and the occurrence of hurricanes under climate change can be represented as a non-homogeneous or mixed Poisson process [47]. In this section, a stochastic renewal process is adopted to model hazard occurrence; the renewal process has been widely utilized in recent studies [48,49,50], as it is a generalization of the HPP.

A renewal process is a counting process whose inter-arrival times are independent and identically distributed (i.i.d.). Let the service life of the engineering structure be (0, tint] and let Tk denote the arrival time of the kth hazard event. The arrival times of the hazards then form a collection of non-negative random variables {T1, T2,..., Tk}, defined as:

$${T_{k}} = {S_{1}} + {S_{2}} + \cdots + {S_{k}}$$
(1)

where {S1, S2,..., Sk} are inter-arrival times.

Based on the properties of renewal theory, the expected number of hazard events in the time interval (0, tint] can be expressed as:

$$\mathbb{E}\left[ {N\left({t_{\text{int}}} \right)} \right] = \sum\limits_{k = 1}^{\infty} {F_{S}^{\left(k\right)}} \left(t_{\text{int}} \right) = \Phi \left(t_{\text{int}} \right)$$
(2)

in which FS(k)(tint) denotes the k-fold convolution of the inter-arrival time CDF FS, i.e., the CDF of the arrival time Tk. The notation Φ(·) represents the renewal function, which satisfies an integral equation obtained by conditioning on the first arrival time t1. Assuming that the cumulative distribution function (CDF) of the inter-arrival time is continuous, the expected number of hazard events can be represented by the following equation:

$$\Phi \left(t_{\text{int}} \right) =\mathbb{E}\left[ {N\left(t_{\text{int}} \right)} \right] = \int_{0}^{\infty} {\mathbb{E}\left[ {N\left(t_{\text{int}} \right)|{S_{1}} = {t_{1}}} \right]} {\mathrm{d}}{{{F}}_{S}}\left({{t_{1}}} \right)$$
(3)

If the first arrival time t1 exceeds tint, no hazard event occurs within the time period (0, tint], which means that \(\mathbb {E}\left [ {N\left (t_{\text {int}} \right)|{S_{1}} = {t_{1}} > t_{\text{int}}} \right ] = 0\). Otherwise, the number of renewals gives:

$$\begin{aligned} \mathbb{E}\left[ {N\left(t_{\text{int}} \right)|{S_{1}} = {t_{1}} < t_{\text{int}}} \right] &= 1 + \mathbb{E}\left[ {N\left({t_{\text{int}} - {t_{1}}} \right)} \right]\\ &= 1 + \Phi \left({t_{\text{int}} - {t_{1}}} \right) \end{aligned}$$
(4)

Substituting Eq. (4) into Eq. (3), the expected number of hazard arrivals can be formulated as:

$$\Phi \left(t_{\text{int}} \right) = {F_{S}}\left(t_{\text{int}} \right) + \int_{0}^{t_{\text{int}}} {\Phi \left({t_{\text{int}} - {t_{1}}} \right){\mathrm{d}}{F_{S}}\left({{t_{1}}} \right)}$$
(5)

When the inter-arrival time Sk follows the exponential distribution, the renewal process above becomes a HPP, which is also known as a Poisson renewal process. The HPP has a constant occurrence rate α, which represents the expected number of hazard arrivals per unit time. Hence, for the HPP, the expected number of hazard events can be calculated as:

$$\mathbb{E}\left[ {N\left(t_{\text{int}} \right)} \right] = \alpha t_{\text{int}}$$
(6)

The number of hazard arrivals within a time interval follows a Poisson distribution, and the probability of n arrivals can be expressed as:

$${{\mathrm{P}}_{\mathrm{r}}}\left[ {N\left(t_{\text{int}} \right) = \;n} \right] = \frac{{{{\left({\alpha t_{\text{int}}} \right)}^{n}}\exp \left({ - \alpha t_{\text{int}}} \right)}}{{n!}},\;\;\;n = 0,1,2, \ldots$$
(7)

and the probability density function of the inter-arrival time S gives:

$${f_{S}}\left(s \right) = \alpha \exp \left({ - \alpha s} \right)$$
(8)
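The renewal construction of Eqs. (1) and (8) can be checked numerically. The following sketch (in Python, with illustrative parameter values that are not part of the paper) simulates HPP hazard arrivals by accumulating exponential inter-arrival times and verifies Eq. (6) empirically:

```python
import numpy as np

def simulate_hpp_arrivals(alpha, t_int, rng):
    """Draw arrival times T_k of a homogeneous Poisson process on (0, t_int]
    by accumulating exponential inter-arrival times S_k (Eqs. (1) and (8))."""
    arrivals = []
    t = rng.exponential(1.0 / alpha)
    while t <= t_int:
        arrivals.append(t)
        t += rng.exponential(1.0 / alpha)
    return np.array(arrivals)

# Empirical check of Eq. (6): E[N(t_int)] = alpha * t_int.
rng = np.random.default_rng(0)
alpha, t_int = 0.5, 50.0   # illustrative occurrence rate and service life
mean_count = np.mean([len(simulate_hpp_arrivals(alpha, t_int, rng))
                      for _ in range(20000)])
print(mean_count)  # close to alpha * t_int = 25
```

The same routine applies to a general renewal process by replacing the exponential draws with any other inter-arrival distribution.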

The definition of long-term loss

The long-term loss is the cumulative damage cost due to hazards over the service life of the engineering system. In this section, the framework for the analytical assessment of the long-term loss, based on the moment generating function (MGF) approach [8], is introduced, covering the definition of the long-term loss, the MGF, and the derivation of the statistical moments of the long-term loss. Given a time interval (0, tint], tint can be defined as the service life of the civil infrastructure. Based on the renewal model above, the arrival time and inter-arrival time of the kth event are denoted as Tk and Sk, respectively, with Tk=S1+S2+···+Sk. The long-term loss Llt can be expressed as:

$${L_{lt}}\left(t_{\text{int}} \right) = \sum\limits_{k = 1}^{N\left(t_{\text{int}} \right)} {{L_{k}}\exp \left({ - r{T_{k}}} \right)}$$
(9)

where Lk represents the loss severity of the kth hazard event, i.e., the probability of failure given the hazard multiplied by the associated economic consequence of the induced structural damage, and r denotes the monetary discount rate. Note that the inter-arrival time is assumed to be independent of the loss severity Lk. Hence, the long-term loss is the total economic loss of the civil infrastructure caused by hazards within the time domain (0, tint], as shown in Fig. 1.

Fig. 1
figure 1

The long-term loss considering discount and stochastic hazard arrivals
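For later comparison, Eq. (9) can be sampled directly by crude Monte Carlo simulation. The sketch below assumes, as in the following section, exponentially distributed loss severities; the numerical inputs (occurrence rate, mean severity, discount rate, service life) are hypothetical illustrative values:

```python
import numpy as np

def sample_long_term_loss(alpha, theta, r, t_int, n_sim, seed=1):
    """Crude Monte Carlo sampling of Eq. (9): L_lt = sum_k L_k * exp(-r * T_k),
    with HPP hazard arrivals (rate alpha) and, as assumed later in the paper,
    exponentially distributed loss severities L_k ~ EXP(theta)."""
    rng = np.random.default_rng(seed)
    losses = np.empty(n_sim)
    for i in range(n_sim):
        total, t = 0.0, rng.exponential(1.0 / alpha)
        while t <= t_int:                       # hazards arriving in (0, t_int]
            total += rng.exponential(1.0 / theta) * np.exp(-r * t)
            t += rng.exponential(1.0 / alpha)   # next renewal, Eq. (1)
        losses[i] = total
    return losses

# Hypothetical inputs: alpha = 0.5/yr, mean severity 1/theta = 100, r = 3%/yr.
samples = sample_long_term_loss(alpha=0.5, theta=0.01, r=0.03,
                                t_int=50.0, n_sim=50000)
print(samples.mean())  # near the analytical mean of Eq. (16), about 1295
```

Such sampled losses serve as the reference solution against which the analytical moments and the maximum entropy PDF can be verified.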

Analytical statistical moments of long-term loss

Moment generating function

The analytical assessment of the statistical moments of long-term loss can be based on the moment generating function approach. For a random variable, the raw moments can be derived by taking derivatives of the MGF [51]. Hence, for a given random variable X, the MGF with \(\beta \in \mathbb {R}\) is defined as:

$${M_{X}}\left(\beta \right) = \mathbb{E}\left[ {\exp \left({\beta X} \right)} \right]$$
(10)

Specifically, the first two raw moments can be derived by taking the first and second derivatives of the MGF:

$$\left\{\begin{array}{l} {{M_{X}}^{\prime}\left(\beta \right) = \frac{{\mathrm{d}}}{{{\mathrm{d}}\beta }}\mathbb{E}\left[ {\exp \left({\beta X} \right)} \right] =\mathbb{E}\left[ {\frac{{\mathrm{d}}}{{{\mathrm{d}}\beta }}\exp \left({\beta X} \right)} \right] = \mathbb{E}\left[ {X\exp \left({\beta X} \right)} \right]}\\ {{M_{X}}^{\prime\prime}\left(\beta \right) = \frac{{\mathrm{d}}}{{{\mathrm{d}}\beta }}\mathbb{E}\left[ {X\exp \left({\beta X} \right)} \right] = \mathbb{E}\left[ {\frac{{\mathrm{d}}}{{{\mathrm{d}}\beta }}X\exp \left({\beta X} \right)} \right] = \mathbb{E}\left[ {{X^{2}}\exp \left({\beta X} \right)} \right]} \end{array} \right.$$
(11)

Then, setting β=0, the first two raw moments are obtained:

$$\left\{\begin{array}{l} {{M_{X}}^{\prime}\left(0 \right) = \mathbb{E}\left[ X \right]}\\ {{M_{X}}^{\prime\prime}\left(0 \right) = \mathbb{E}\left[ {{X^{2}}} \right]} \end{array} \right.$$
(12)

Generally, the jth raw moment of the random variable can be calculated by taking the jth derivative of MGF at zero, which can be formulated as:

$${M_{X}}^{\left(j \right)}\left(0 \right) =\mathbb{E}\left[ {{X^{j}}} \right],\;j \ge 1$$
(13)

Analytical statistical moments based on the moment generating function

On the basis of the MGF, the derivation of the statistical moments of long-term loss relies on finding its MGF \({M_{{L_{lt}}}}\left (\cdot \right)\). Herein, the derivation of the MGF under a Poisson renewal process is provided. By the law of total expectation, and noting that, conditional on N(tint)=n, the arrival times of a HPP are distributed as the order statistics of n i.i.d. uniform random variables on (0, tint], the MGF of the long-term loss can be formulated as:

$$\begin{aligned} {M_{{L_{lt}}\left(t_{\text{int}} \right)}}\left(\beta \right) &= \mathbb{E}\left[ {\exp \left({\beta {L_{lt}}\left(t_{\text{int}} \right)} \right)} \right]\\ &= \sum\limits_{n = 0}^{\infty} {\mathbb{E}\left[ {\exp \left({\beta \sum\limits_{k = 1}^{n} {{L_{k}}\exp \left({ - r{T_{k}}} \right)}} \right)\Big|N\left(t_{\text{int}} \right) = n} \right]} \;{{\mathrm{P}}_{\mathrm{r}}}\left[ {N\left(t_{\text{int}} \right) = n} \right]\\ &= \exp \left({ - \alpha t_{\text{int}}} \right) + \exp \left({ - \alpha t_{\text{int}}} \right)\sum\limits_{n = 1}^{\infty} {{{\left[ {\frac{1}{{t_{\text{int}}}}\int_{0}^{t_{\text{int}}} {{M_{L}}\left({\beta \exp \left({ - rt} \right)} \right){\mathrm{d}}t}} \right]}^{n}}\frac{{{{\left({\alpha t_{\text{int}}} \right)}^{n}}}}{{n!}}} \\ &= \exp \left[ {\alpha \int_{0}^{t_{\text{int}}} {\left[ {{M_{L}}\left({\beta \exp \left({ - rt} \right)} \right) - 1} \right]{\mathrm{d}}t}} \right] \end{aligned}$$
(14)

where ML(·) is the MGF of the loss severity L. The loss severity L is assumed to follow the exponential distribution with parameter θ, i.e., L∼EXP(θ), whose MGF is ML(β)=θ/(θ−β) [52]. Then, substituting ML into Eq. (14), the MGF of the long-term loss under the HPP can be reformulated as:

$$\begin{aligned} {M_{{L_{lt}}\left(t_{\text{int}} \right)}}\left(\beta \right) &= \exp \left[ {\alpha \int_{0}^{t_{\text{int}}} {\left[ {\frac{\theta }{{\theta - \beta \exp \left({ - rt} \right)}} - 1} \right]{\mathrm{d}}t}} \right]\\ &= {\left[ {\frac{{\theta - \beta \exp \left({ - {rt}_{\text{int}}} \right)}}{{\theta - \beta }}} \right]^{\frac{\alpha }{r}}} \end{aligned}$$
(15)
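The closed form of Eq. (15) can be cross-checked against the integral form of Eq. (14) by numerical quadrature. The following is a minimal Python sketch with hypothetical parameter values (β must remain below θ for the MGF of the exponential severity to exist):

```python
import numpy as np
from scipy.integrate import quad

def mgf_closed_form(beta, alpha, theta, r, t_int):
    """Closed-form MGF of the long-term loss under the HPP, Eq. (15)."""
    return ((theta - beta * np.exp(-r * t_int)) / (theta - beta)) ** (alpha / r)

def mgf_integral_form(beta, alpha, theta, r, t_int):
    """Integral form of Eq. (14) with M_L(b) = theta / (theta - b)."""
    integrand = lambda t: theta / (theta - beta * np.exp(-r * t)) - 1.0
    val, _ = quad(integrand, 0.0, t_int)
    return np.exp(alpha * val)

# Hypothetical inputs: alpha = 0.5/yr, theta = 0.01, r = 3%/yr, t_int = 50 yr.
alpha, theta, r, t_int = 0.5, 0.01, 0.03, 50.0
for beta in [0.0, 0.002, 0.005]:
    print(mgf_closed_form(beta, alpha, theta, r, t_int),
          mgf_integral_form(beta, alpha, theta, r, t_int))
```

The two forms agree to quadrature accuracy, confirming the analytical integration in Eq. (15).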

Then the mean (u1) and standard deviation (u2) of long-term loss can be easily obtained by taking the first and second derivatives of MGF at β=0:

$${u_{1}} = \mathbb{E}\left[ {{L_{lt}}\left(t_{\text{int}} \right)} \right] = M_{{L_{lt}}\left(t_{\text{int}} \right)}^{\left(1 \right)}\left(0 \right) = \frac{\alpha }{{\theta r}}\left[ {1 - \exp \left({ - {rt}_{\text{int}}} \right)} \right]$$
(16)
$$u_{2}^{2} = M_{{L_{lt}}\left({{t_{{\text{int}}}}} \right)}^{\left(2 \right)}\left(0 \right) - {\left({M_{{L_{lt}}\left({{t_{{\text{int}}}}} \right)}^{\left(1 \right)}\left(0 \right)} \right)^{2}} = \frac{\alpha }{{{\theta^{2}}r}}\left[ {1 - \exp \left({ - 2r{t_{{\text{int}}}}} \right)} \right]$$
(17)

In this paper, the higher-order moments of long-term loss are also of great interest. Similarly, the skewness (u3) and kurtosis (u4) can be derived as [8]:

$$\begin{aligned} {u_{3}} &= \frac{{\mathbb{E}\left[ {L_{lt}^{3}\left({{t_{{\text{int}}}}} \right)} \right] - 3{u_{1}}u_{2}^{2} - u_{1}^{3}}}{{u_{2}^{3}}}\\ &= \frac{{2\alpha {r^{1/2}}\left[ {1 - \exp \left({ - 3r{t_{{\text{int}}}}} \right)} \right]}}{{{{\left[ {\alpha - \alpha \exp \left({ - 2r{t_{{\text{int}}}}} \right)} \right]}^{3/2}}}}\\ \end{aligned}$$
(18)
$$\begin{aligned} {u_{4}} &= \frac{{\mathbb{E}\left[ {L_{lt}^{4}\left({{t_{{\text{int}}}}} \right)} \right] - 4{u_{1}}\mathbb{E}\left[ {L_{lt}^{3}\left({{t_{{\text{int}}}}} \right)} \right] + 6u_{1}^{2}\mathbb{E}\left[ {L_{lt}^{2}\left({{t_{{\text{int}}}}} \right)} \right] - 3u_{1}^{4}}}{{u_{2}^{4}}}\\ &= \frac{{\left({6r + 3\alpha} \right) + \left({6r - 3\alpha} \right)\exp \left({ - 2r{t_{{\text{int}}}}} \right)}}{{\alpha \left[ {1 - \exp \left({ - 2r{t_{{\text{int}}}}} \right)} \right]}} \end{aligned}$$
(19)
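Eqs. (16)-(19) are straightforward to evaluate. The following sketch implements the four closed-form moments; the parameter values are hypothetical and serve only to illustrate typical magnitudes:

```python
import numpy as np

def long_term_loss_moments(alpha, theta, r, t_int):
    """Mean, standard deviation, skewness, and kurtosis of the long-term loss
    under the HPP model with L ~ EXP(theta), Eqs. (16)-(19)."""
    e1 = np.exp(-r * t_int)
    e2 = np.exp(-2.0 * r * t_int)
    e3 = np.exp(-3.0 * r * t_int)
    u1 = alpha / (theta * r) * (1.0 - e1)                # mean, Eq. (16)
    u2 = np.sqrt(alpha / (theta ** 2 * r) * (1.0 - e2))  # std. dev., Eq. (17)
    u3 = (2.0 * alpha * np.sqrt(r) * (1.0 - e3)
          / (alpha * (1.0 - e2)) ** 1.5)                 # skewness, Eq. (18)
    u4 = ((6.0 * r + 3.0 * alpha) + (6.0 * r - 3.0 * alpha) * e2) \
         / (alpha * (1.0 - e2))                          # kurtosis, Eq. (19)
    return u1, u2, u3, u4

# Hypothetical inputs: alpha = 0.5/yr, 1/theta = 100, r = 3%/yr, t_int = 50 yr.
u1, u2, u3, u4 = long_term_loss_moments(0.5, 0.01, 0.03, 50.0)
print(u1, u2, u3, u4)
```

These closed-form values can also be cross-checked against sample moments from crude Monte Carlo simulation of Eq. (9).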

Maximum entropy method for deriving the PDF of long-term loss

Once the first four statistical moments of the long-term loss are obtained from the MGF, the PDF of the long-term loss can be approximated accordingly by using these moments together with a given distribution model. In this paper, the maximum entropy method is specifically used, since it is regarded as the most unbiased approximation of an underlying PDF; that is, the MEM finds the most probable PDF among all candidate PDFs under the statistical moment constraints.

Given a continuous random variable Y with PDF fY(y), the information-theoretical entropy of Y is defined as:

$${\mathcal{H}}\left({{f_{Y}}\left(y \right)} \right) = - \int_{Y} {{f_{Y}}\left(y \right)} \log \left({{f_{Y}}\left(y \right)} \right){\mathrm{d}}y$$
(20)

The estimation of the underlying PDF fY(y) by using a finite number of statistical moments of Y as constraints is a classical problem in statistics, and the MEM developed by Jaynes [31] is one of the most popular approaches to this problem. On the basis of the principle of maximum entropy, the general constrained optimization formulation of MEM reads:

$$\left\{\begin{array}{ll} {{\text{Find}}:}&{{f_{Y}}\left(y \right)}\\ {{\text{Maximize}}:}&{{\mathcal{H}}\left({{f_{Y}}\left(y \right)} \right) = - \int_{Y} {{f_{Y}}\left(y \right)} \log \left[ {{f_{Y}}\left(y \right)} \right]{\mathrm{d}}y}\\ {{\mathrm{s}}.{\mathrm{t}}.:}&{\mathbb{E}\left[ {{Y^{i}}} \right] = \int_{Y} {{y^{i}}{f_{Y}}\left(y \right){\mathrm{d}}y,~~i = 1,2,...,k} } \end{array} \right.$$
(21)

where \(\mathbb {E}\left [ {{Y^{i}}} \right ]\) is the ith raw moment of Y, and k represents the given number of statistical moments; herein, k=4. For the long-term loss Llt, the first four analytical moments obtained by the MGF mentioned above are u1, u2, u3, and u4. Before dealing with the optimization problem in Eq. (21), a standardizing transformation is first adopted to transform the random variable Llt into a standard variable Y, which is given by:

$$Y = \frac{{L_{lt} - {u_{1}}}}{{{u_{2}}}}$$
(22)

Hence, the first-four raw moments of the standardized random variable Y are easily derived as:

$$\mathbb{E}\left[ Y \right] = 0,~\mathbb{E}\left[ {{Y^{2}}} \right] = 1,~\mathbb{E}\left[ {{Y^{3}}} \right] = {u_{3}},~\mathbb{E}\left[ {{Y^{4}}} \right] = {u_{4}}$$
(23)
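The standardization in Eqs. (22)-(23) can be illustrated on samples. In the sketch below, an exponential sample stands in for Llt (an assumption made purely for illustration, since the exponential distribution has known skewness 2 and kurtosis 9); after standardizing, the first two raw moments become 0 and 1, while the third and fourth approach the skewness and kurtosis:

```python
import numpy as np

# Stand-in sample for L_lt: exponential with mean 100 (illustrative assumption;
# the exponential has skewness 2 and kurtosis 9, so Eq. (23) is easy to check).
rng = np.random.default_rng(2)
l = rng.exponential(100.0, 200000)

u1, u2 = l.mean(), l.std()     # sample mean and standard deviation
y = (l - u1) / u2              # standardization, Eq. (22)

raw = [np.mean(y ** i) for i in range(1, 5)]   # E[Y], E[Y^2], E[Y^3], E[Y^4]
print(raw)  # approximately [0, 1, 2, 9], matching Eq. (23)
```

In the proposed framework the analytical u1 and u2 from Eqs. (16)-(17) replace these sample estimates.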

Once the first-four raw moments of Y are obtained, the optimization problem in Eq. (21) can be solved by introducing the following Lagrangian function:

$$\begin{array}{*{20}l} {\mathcal{L}}\left[ {{\boldsymbol{\lambda }},{f_{Y}}\left(y \right)} \right] = &- \int_{Y} {{f_{Y}}\left(y \right)\log \left[ {{f_{Y}}\left(y \right)} \right]} {\mathrm{d}}y\\ &- \left({{\lambda_{0}} - 1} \right)\left[ {\int_{Y} {{f_{Y}}\left(y \right)} {\mathrm{d}}y - 1} \right]\\ &- \sum\limits_{i = 1}^{k} {{\lambda_{i}}\left[ {\int_{Y} {{y^{i}}{f_{Y}}\left(y \right)} {\mathrm{d}}y - \mathbb{E}\left[ {{Y^{i}}} \right]} \right]} \end{array}$$
(24)

where λ=[λ0,λ1,...,λk]T is a vector collecting the Lagrangian multipliers. At the optimal solution, the stationarity condition is:

$$\frac{{\partial {\mathcal{L}}\left({{\boldsymbol \lambda},{f_{Y}}\left(y \right)} \right)}}{{\partial {f_{Y}}\left(y \right)}} = 0$$
(25)

then a closed form of fY(y) can be obtained by:

$${\hat{f}_{Y}}\left(y \right) = \exp \left({ - {\lambda_{0}} - \sum\limits_{i = 1}^{k} {{\lambda_{i}}{y^{i}}}} \right)$$
(26)

in which λ0 is derived based on the normalization axiom in probability theory such that:

$${\lambda_{0}} = \log \left[ {\int_{Y} {\exp \left({ - \sum\limits_{i = 1}^{k} {{\lambda_{i}}{y^{i}}}} \right){\mathrm{d}}y}} \right]$$
(27)

Therefore, a closed form of fY(y) can be obtained by determining the Lagrangian multipliers λ. To this end, an equivalent unconstrained optimization formulation based on the Kullback-Leibler (K-L) divergence is employed to solve for the unknown parameters. The K-L divergence measures the divergence between the true PDF fY(y) and its estimator \({\hat {f}_{Y}}\left (y \right)\), which is expressed as [37]:

$$\begin{array}{*{20}l} K\left[ {{f_{Y}}\left(y \right),{\hat{f}_{Y}}\left(y \right)} \right] &= \int_{Y} {{f_{Y}}\left(y \right)\log \left[ {{f_{Y}}\left(y \right)/{\hat{f}_{Y}}\left(y \right)} \right]} {\mathrm{d}}y\\ &= \int_{Y} {{f_{Y}}\left(y \right)\log \left[ {{f_{Y}}\left(y \right)} \right]} {\mathrm{d}}y - \int_{Y} {{f_{Y}}\left(y \right)\log \left[ {{\hat{f}_{Y}}\left(y \right)} \right]} {\mathrm{d}}y \end{array}$$
(28)

Substituting Eq. (26) for \({\hat {f}_{Y}}\left (y \right)\), then Eq. (28) can be rewritten as:

$$K\left[ {{f_{Y}}\left(y \right),{\hat{f}_{Y}}\left(y \right)} \right] = - {\mathcal{H}}\left({{f_{Y}}\left(y \right)} \right) + {\lambda_{0}} + \sum\limits_{i = 1}^{k} {{\lambda_{i}}\mathbb{E}\left[ {{Y^{i}}} \right]}$$
(29)

in which \({\mathcal {H}}\left ({{f_{Y}}\left (y \right)} \right)\) is the entropy of the true PDF. Minimizing the K-L divergence between the true PDF and its estimator drives the estimated PDF toward the true one. Hence, the minimization of the K-L divergence in Eq. (29) is an alternative to Eq. (21). Although the value of \({\mathcal {H}}\left ({{f_{Y}}\left (y \right)} \right)\) is unknown, it is a constant independent of the parameters λ. Thus, minimizing the K-L divergence in Eq. (29) is equivalent to minimizing the following objective function:

$$\begin{array}{*{20}l} \Gamma \left({\boldsymbol{\lambda }} \right) &= {\lambda_{0}} + \sum\limits_{i = 1}^{k} {{\lambda_{i}}\mathbb{E}\left[ {{Y^{i}}} \right]} \\ &= \log \left[ {\int_{Y} {\exp \left({ - \sum\limits_{i = 1}^{k} {{\lambda_{i}}{y^{i}}}} \right){\mathrm{d}}y}} \right] + \sum\limits_{i = 1}^{k} {{\lambda_{i}}\mathbb{E}\left[ {{Y^{i}}} \right]} \end{array}$$
(30)

It is noted that the objective function Γ(λ) is convex in λ. Therefore, a global minimum can be found by minimizing the objective function, and the optimality condition is:

$$\frac{{\partial \Gamma \left({\boldsymbol \lambda} \right)}}{{\partial {\lambda_{i}}}} = 0 ~\to ~\int_{Y} {{y^{i}}{\hat{f}_{Y}}\left(y \right){\mathrm{d}}} y = \mathbb{E}\left[ {{Y^{i}}} \right],~i = 0,1,2,...,k$$
(31)

Equation (31) recovers exactly the moment constraints in Eq. (21). Therefore, the unknown parameters λ can be determined by solving the following unconstrained optimization problem:

$$\left\{ {\begin{array}{ll} \text{Find:}&{{\boldsymbol \lambda} = {{\left[ {{\lambda_{0}},{\lambda_{1}},...,{\lambda_{k}}} \right]}^{\mathrm{T}}}}\\ \text{Minimize:}&{{\Gamma} \left({\boldsymbol \lambda} \right) = \log \left[ {\int_{Y} {\exp \left({ - \sum\limits_{i = 1}^{k} {{\lambda_{i}}{y^{i}}}} \right){\mathrm{d}}y}} \right] + \sum\limits_{i = 1}^{k} {{\lambda_{i}}\mathbb{E}\left[ {{Y^{i}}} \right]} } \end{array}} \right.$$
(32)

Once the Lagrange multipliers are determined from Eq. (32), the PDF of the long-term loss follows as:

$$\begin{array}{*{20}l} {\hat{f}_{L_{lt}}}\left(l_{lt} \right) &= \frac{1}{{{u_{2}}}}{\hat{f}_{Y}}\left(y \right)\\ &= \frac{1}{{{u_{2}}}}\exp \left({ - {\lambda_{0}} - \sum\limits_{i = 1}^{k} {{\lambda_{i}}{{\left({\frac{{l_{lt} - {u_{1}}}}{{{u_{2}}}}} \right)}^{i}}}} \right) \end{array}$$
(33)
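The unconstrained minimization in Eq. (32) is straightforward to implement numerically. The following Python sketch (an illustration under the stated formulation, not the authors' code; function and variable names are hypothetical) fits the Lagrange multipliers of the standardized variable Y from its first four moments by minimizing the convex objective Γ(λ), using the gradient given by Eq. (31):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

def fit_maxent(moments, y):
    """Fit multipliers lam = [lam_1, ..., lam_4] of the MaxEnt density
    exp(-lam_0 - sum_i lam_i y^i) to the target moments E[Y^i]."""
    m = np.asarray(moments, dtype=float)             # [E[Y], E[Y^2], E[Y^3], E[Y^4]]
    powers = np.vstack([y**i for i in range(1, 5)])  # shape (4, len(y))

    def unnormalized(lam):
        # exp(-sum_i lam_i y^i), clipped to avoid overflow far from the optimum
        return np.exp(np.clip(-(lam @ powers), -700.0, 700.0))

    def objective(lam):
        # Gamma(lam) of Eq. (30): log of the normalizing integral + lam . m
        return np.log(trapezoid(unnormalized(lam), y)) + lam @ m

    def gradient(lam):
        # dGamma/dlam_i = m_i - E_fit[Y^i]; zero at the optimum, cf. Eq. (31)
        z = unnormalized(lam)
        pdf = z / trapezoid(z, y)
        fitted = np.array([trapezoid(powers[i] * pdf, y) for i in range(4)])
        return m - fitted

    res = minimize(objective, x0=[0.0, 0.5, 0.0, 0.01], jac=gradient, method="BFGS")
    lam = res.x
    lam0 = np.log(trapezoid(unnormalized(lam), y))   # multiplier lam_0, Eq. (26)
    return lam0, lam

# Standardized moments [E[Y], E[Y^2], skewness, kurtosis] of a right-skewed case
y = np.linspace(-6.0, 12.0, 4001)
lam0, lam = fit_maxent([0.0, 1.0, 0.6101, 3.5411], y)
pdf_y = np.exp(-lam0 - lam @ np.vstack([y**i for i in range(1, 5)]))
```

The PDF of the long-term loss itself then follows from Eq. (33) via the change of variable \(l_{lt} = u_1 + u_2 y\). Note that the fitted multipliers depend on the chosen integration domain, so they need not coincide numerically with values obtained on a different support.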

The procedure of the proposed method, incorporating the analytical statistical moments and the maximum entropy method, is shown in Fig. 2. This study has focused on the homogeneous Poisson process; nevertheless, the proposed approach can be extended to non-stationary models, such as non-Poisson renewal models and the non-homogeneous Poisson process. The derivation of the associated moment generating functions and the first four moments would differ, while the maximum entropy estimation process can still be adopted. Future studies should incorporate non-stationary hazard models and investigate their effect on the probability distribution of the loss.

Fig. 2 The flowchart of the proposed method

Illustrative example

The proposed probabilistic approach is applied to a coastal bridge to quantify the long-term loss under hurricane hazards. The investigated bridge is a multi-span simply supported girder bridge, a type vulnerable to deck unseating failure under hurricanes [53, 54]. The stochastic Poisson process is employed to model the occurrence of hurricanes over the service life of the bridge. As the Poisson process is a special renewal process with exponentially distributed inter-arrival times, the proposed renewal approach can be utilized. The key inputs for the long-term loss assessment include the occurrence rate α, the service life \(t_{int}\), the distribution parameters of the loss severity L, and the monetary discount rate r. By using the moment generating function approach, the first four analytical central moments of the long-term loss can be computed. The PDF of the long-term loss can then be constructed efficiently through the maximum entropy method with these moments as constraints. The resulting probability distribution provides essential information for subsequent risk management and decision-making.

In this example, the key parameter of the hurricane hazard is the occurrence rate α of the stochastic Poisson process. It is computed as 0.245 by counting the annual number of hurricane events in the investigated region [8]. The loss severity refers to the damage cost of the system given the occurrence of the hazard and can be computed as the product of the failure probability given hazard occurrence and the consequence, e.g., the repair cost of the bridge superstructure due to deck unseating. The failure probability is commonly obtained from a vulnerability assessment; for illustrative purposes, it is assumed to be 0.1 here. The repair cost of the bridge is taken as 12.832 million USD [55], so the loss severity is 1.283 million USD. Herein, the mean value of the loss severity is taken as 1.283 million USD and the loss severity is assumed to follow an exponential distribution. The monetary discount rate is 2% for a long time horizon, and the service life of the bridge is 75 years. Given these parameters, the first four central moments are computed as 12.2099×10^6 USD, 4.3773×10^6 USD, 0.6101, and 3.5411, which are the essential inputs for the MEM. The Lagrange multipliers of the MEM are estimated as λ=[0.3459, 0.5015, −0.1287, 0.0164]. Figure 3 shows the PDF and CDF (in logarithmic scale) of the probabilistic long-term loss obtained by the proposed method and Monte Carlo simulation. The PDF and CDF obtained by the proposed method agree well with those estimated by MCS with 10^6 runs. For parametric analysis, when the service life reaches 150 years, the associated PDF and CDF are reconstructed as shown in Fig. 4, where the first four central moments of the long-term loss are 14.9343×10^6 USD, 4.4849×10^6 USD, 0.5735, and 3.4922, and the Lagrange multipliers are estimated as λ=[0.3169, 0.5012, −0.1163, 0.0141]. Again, the results confirm the accuracy of the proposed method.
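The moment inputs above can be cross-checked numerically. For a homogeneous Poisson hazard model with rate α, discount rate r, horizon t, and exponentially distributed severity with mean μ, the cumulants of the discounted aggregate loss take the closed compound-Poisson form κ_n = αE[L^n](1 − e^{−nrt})/(nr) with E[L^n] = n!μ^n. The sketch below (illustrative code, not the authors' implementation) reproduces the 75-year, r = 2% moments quoted above:

```python
import math

def discounted_loss_moments(alpha, mu, r, t):
    """First four central moments (mean, std, skewness, kurtosis) of the
    discounted compound-Poisson long-term loss with exponential severity."""
    # n-th cumulant: kappa_n = alpha * E[L^n] * (1 - exp(-n r t)) / (n r),
    # with E[L^n] = n! * mu^n for an exponential severity of mean mu
    kappa = [alpha * math.factorial(n) * mu**n * (1 - math.exp(-n * r * t)) / (n * r)
             for n in range(1, 5)]
    mean = kappa[0]
    std = math.sqrt(kappa[1])
    skew = kappa[2] / kappa[1]**1.5          # standardized third cumulant
    kurt = 3.0 + kappa[3] / kappa[1]**2      # excess kurtosis plus 3
    return mean, std, skew, kurt

# 75-year service life, r = 2%, alpha = 0.245, mean severity 1.283e6 USD
mean, std, skew, kurt = discounted_loss_moments(0.245, 1.283e6, 0.02, 75)
# mean ≈ 12.2099e6 USD, std ≈ 4.3773e6 USD, skew ≈ 0.6101, kurt ≈ 3.5411
```

With t = 150 the same formulas reproduce the 150-year moments reported above.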

Fig. 3 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (75 years, r = 2%)

Fig. 4 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (150 years, r = 2%)

Apart from the service life, the monetary discount rate is an essential parameter for the loss assessment. The probabilistic long-term loss is therefore also estimated with a monetary discount rate of r=3%. Figures 5 and 6 show the probability distributions of the long-term loss for 75-year and 150-year service lives, respectively. For the 75-year service life, the first four central moments are 9.3735×10^6 USD, 3.6461×10^6 USD, 0.7108, and 3.7512; for the 150-year service life, they are 10.3614×10^6 USD, 3.3663×10^6 USD, 0.7000, and 3.7349. The Lagrange multipliers for the two cases are obtained as λ=[0.4030, 0.5155, −0.1529, 0.0197] and λ=[0.3968, 0.5142, −0.1502, 0.0192], respectively. Compared with the cases with r=2%, the losses with the larger discount rate are more right-skewed, so decision-makers may need to pay special attention to the potential tail risks.

Fig. 5 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (75 years, r = 3%)

Fig. 6 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (150 years, r = 3%)

Moreover, the loss severity can be affected by the structural performance, as it is the product of the failure probability and the repair cost. The structural performance can deteriorate due to hazards or improve due to repair actions. To identify the impact of changes in the loss severity on the probability distribution of the long-term loss, Figs. 7 and 8 show the PDFs and CDFs of the long-term loss for different loss severities. Figure 7 shows the case with doubled expected loss severity (\(\mathbb {E}\left [ L \right ] = 2 \times 1.283 \times {10^{6}}\) USD); the loss severity remains exponentially distributed, and the service life and monetary discount rate are still 75 years and 2%, respectively. The first four central moments are 24.4197×10^6 USD, 8.7546×10^6 USD, 0.6101, and 3.5411, and the Lagrange multipliers for the MEM are λ=[0.3459, 0.5015, −0.1287, 0.0164]. Figure 8 gives the scenario in which the expected loss severity is halved (\(\mathbb {E}\left [ L \right ] = 0.5 \times 1.283 \times {10^{6}}\) USD); likewise, the service life is 75 years and the monetary discount rate is 2%. The first four central moments are 6.1049×10^6 USD, 2.1886×10^6 USD, 0.6101, and 3.5411, and the associated Lagrange multipliers for the MEM are λ=[0.3459, 0.5015, −0.1287, 0.0164]. Again, the results produced by the proposed method are in good agreement with those obtained by MCS.

Fig. 7 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (\(\mathbb {E}\left [ L \right ] = 2 \times 1.283 \times {10^{6}}\) USD)

Fig. 8 (a) PDF and (b) CDF of long-term loss obtained by Monte Carlo simulation and proposed method (\(\mathbb {E}\left [ L \right ] = 0.5 \times 1.283 \times {10^{6}}\) USD)

Moreover, the comparison of the first four central moments obtained by the proposed method and MCS (10^6 runs) is provided in Table 1; the unit of the mean and standard deviation is 10^6 USD. Both the proposed method and MCS are performed on a computer with an Intel(R) Core(TM) i7-9750H CPU at 2.60 GHz and 16 GB RAM. As seen from Table 1, the estimated first four central moments agree well with those obtained by MCS, with a maximum relative error below 1%, which illustrates the accuracy of the proposed method. Furthermore, the CPU time required by the proposed method is within a few seconds, while that of MCS is much larger; for instance, in case 2, MCS consumes more than 600 s while the proposed approach takes only 3.8 s. The proposed method is thus particularly efficient when various parameters are considered in the probabilistic analysis of the long-term loss. The results in Figs. 3, 4, 5, 6, 7 and 8 show that the proposed method, incorporating the analytical statistical moments and the maximum entropy method, achieves high accuracy in predicting the probability distribution of the long-term loss.

Apart from the expected long-term loss, decision-makers may pay special attention to the upper tail of the long-term loss, as the upper tail is associated with extreme costs; its accuracy is therefore essential. For the case shown in Fig. 3, different percentiles of the long-term loss obtained by the proposed method and MCS are listed in Table 2. The proposed method obtains results comparable with MCS, which indicates that it also preserves the accuracy of the upper tail of the PDF. Based on the upper tail of the PDF of the long-term loss, decision-makers can identify the characteristics of extreme losses. For instance, as shown in Table 2, when the 95% percentile is of interest, the decision-makers can be 95% confident that the long-term loss of this bridge does not exceed 20.16 × 10^6 USD within the 75-year service life. Such statements may serve as metrics for risk-based decision-making.
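Percentiles such as those reported in Table 2 can be read directly off a numerically reconstructed CDF. The sketch below is illustrative (the function name is hypothetical) and uses a standard normal density as a stand-in for any MaxEnt density evaluated on a grid:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def percentile_from_pdf(y, pdf, p):
    """Invert a gridded PDF: integrate it to a CDF, then interpolate the p-quantile."""
    cdf = cumulative_trapezoid(pdf, y, initial=0.0)
    cdf /= cdf[-1]                 # renormalize to absorb truncation error
    return np.interp(p, cdf, y)    # cdf is monotone, so interp inverts it

# Stand-in density: standard normal evaluated on a grid
y = np.linspace(-8.0, 8.0, 8001)
pdf = np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)
q95 = percentile_from_pdf(y, pdf, 0.95)   # ≈ 1.645 for the standard normal
```

Applied to the fitted PDF of the long-term loss, the same inversion yields the tabulated percentiles.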

Table 1 Relative errors of first-four central moments
Table 2 Different percentiles obtained by the proposed method and MCS

By employing the proposed approach, the PDF of the long-term loss can be evaluated for various parameters related to the loss estimation, i.e., the monetary discount rate r, the time interval \(t_{int}\), and the expected loss severity \(\mathbb {E}\left [ L \right ]\). The baseline parameters are r=2%, \(t_{int}\)=75 years, and \(\mathbb {E}\left [ L \right ]=1.283\times 10^{6}\) USD, and in each case only one parameter is varied for comparison. Figure 9 shows the PDF of the long-term loss under different parameters, and the corresponding first four central moments are listed in Table 3. As shown in Fig. 9(a) to (c), the expectation of the long-term loss is sensitive to changes in the monetary discount rate, the expected loss severity, and the service life. Figure 9(b) shows that the standard deviation is only weakly affected by the service life when the discount rate and loss severity remain unchanged. As seen from Table 3, the skewness and kurtosis are not affected by changes in the loss severity under the same service life and discount rate.
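The invariance of the skewness and kurtosis with respect to the loss severity scale, observed in Table 3, can also be verified analytically: since the long-term loss is linear in the severity, scaling L by a factor c scales the n-th cumulant as \(\kappa_n \to c^n \kappa_n\), and the scale factors cancel in the standardized ratios:

$$\frac{{{c^{3}}{\kappa_{3}}}}{{{{\left({{c^{2}}{\kappa_{2}}} \right)}^{3/2}}}} = \frac{{{\kappa_{3}}}}{{\kappa_{2}^{3/2}}},\qquad \frac{{{c^{4}}{\kappa_{4}}}}{{{{\left({{c^{2}}{\kappa_{2}}} \right)}^{2}}}} = \frac{{{\kappa_{4}}}}{{\kappa_{2}^{2}}}$$

This is why the skewness and kurtosis in Table 3 remain unchanged when only \(\mathbb {E}\left [ L \right ]\) is varied.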

Fig. 9 PDF of long-term loss (a) given monetary discount rate r ranging from 1% to 5%, (b) given time interval \(t_{int}\) ranging from 50 years to 150 years, and (c) given expected loss severity ranging from 0.5 × 1.283 × 10^6 to 2.5 × 1.283 × 10^6 USD

Table 3 The first-four central moments under different parameters

Concluding remarks

In this paper, a probabilistic analysis framework for the long-term loss of civil infrastructures under hazards is proposed. The probability distribution of the long-term loss can be effectively obtained by incorporating the analytical statistical moments and the maximum entropy approach. Besides the mean and the standard deviation, higher-order moments of the long-term loss, i.e., skewness and kurtosis, can be assessed by using the MGF based on the stochastic renewal process. Once the first four analytical statistical moments are obtained, the maximum entropy method can be used to derive the underlying probability density function of the long-term loss. Compared with Monte Carlo simulation, the proposed method significantly reduces the computational effort of the long-term loss assessment. The illustrative example demonstrates the accuracy and efficiency of the proposed approach: the PDF and CDF obtained by the proposed method agree well with those estimated by Monte Carlo simulation. The proposed method is conditioned on several assumptions; for instance, the hazard arrivals follow a homogeneous Poisson process, and the analytical derivations of the statistical moments assume that the loss severity follows an exponential distribution. Future studies are encouraged to explore non-stationary hazard models and different loss severity models, and vulnerability assessment should also be considered.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

HPP: homogeneous Poisson process

MEM: maximum entropy method

MGF: moment generating function

References

1. Giouvanidis AI, Dong Y (2020) Seismic loss and resilience assessment of single-column rocking bridges. Bull Earthq Eng 18:4481–4513. https://doi.org/10.1007/s10518-020-00865-5
2. Zhu D, Li Y, Dong Y, Yuan P (2021) Long-term loss assessment of coastal bridges from hurricanes incorporating overturning failure mode. Adv Bridge Eng 2(1):1–15. https://doi.org/10.1186/s43251-020-00030-7
3. Cheng M, Frangopol DM (2021) Life-cycle optimization of structural systems based on cumulative prospect theory: Effects of the reference point and risk attitudes. Reliab Eng Syst Saf 218:108100. https://doi.org/10.1016/j.ress.2021.108100
4. Goda K, Hong H (2006) Optimal seismic design considering risk attitude, societal tolerable risk level, and life quality criterion. J Struct Eng 132(12):2027–2035. https://doi.org/10.1061/(ASCE)0733-9445(2006)132:12(2027)
5. Goda K, Hong H (2008) Application of cumulative prospect theory: Implied seismic design preference. Struct Saf 30(6):506–516. https://doi.org/10.1016/j.strusafe.2007.09.007
6. Cheng M, Frangopol DM (2022) Life-cycle optimization of structural systems based on cumulative prospect theory: Effects of the reference point and risk attitudes. Reliab Eng Syst Saf 218:108100. https://doi.org/10.1016/j.ress.2021.108100
7. Pandey MD, Van der Weide J (2017) Stochastic renewal process models for estimation of damage cost over the life-cycle of a structure. Struct Saf 67:27–38. https://doi.org/10.1016/j.strusafe.2017.03.002
8. Li Y, Dong Y, Qian J (2020) Higher-order analysis of probabilistic long-term loss under nonstationary hazards. Reliab Eng Syst Saf 203:107092. https://doi.org/10.1016/j.ress.2020.107092
9. Pandey MD, van der Weide J (2018) Probability distribution of the seismic damage cost over the life cycle of structures. Struct Saf 72:74–83. https://doi.org/10.1016/j.strusafe.2017.12.007
10. Wang C, Zhang H (2018) Probability-based estimate of tropical cyclone damage: An explicit approach and application to Hong Kong, China. Eng Struct 167:471–480. https://doi.org/10.1016/j.engstruct.2018.04.064
11. Zhao Y-G, Ono T (2001) Moment methods for structural reliability. Struct Saf 23(1):47–75. https://doi.org/10.1016/S0167-4730(00)00027-8
12. Rahman S, Xu H (2004) A univariate dimension-reduction method for multi-dimensional integration in stochastic mechanics. Probabilistic Eng Mech 19(4):393–408. https://doi.org/10.1016/j.probengmech.2004.04.003
13. Xu H, Rahman S (2004) A generalized dimension-reduction method for multidimensional integration in stochastic mechanics. Int J Numer Methods Eng 61(12):1992–2019. https://doi.org/10.1002/nme.1135
14. Xu J, Kong F (2018) A cubature collocation based sparse polynomial chaos expansion for efficient structural reliability analysis. Struct Saf 74:24–31. https://doi.org/10.1016/j.strusafe.2018.04.001
15. Mysovskikh I (1980) The approximation of multiple integrals by using interpolatory cubature formulae. In: Quantitative Approximation, 217–243. Elsevier. https://doi.org/10.1016/B978-0-12-213650-4.50025-8
16. Liu R, Fan W, Wang Y, Ang AH-S, Li Z (2019) Adaptive estimation for statistical moments of response based on the exact dimension reduction method in terms of vector. Mech Syst Signal Process 126:609–625. https://doi.org/10.1016/j.ymssp.2019.02.035
17. Chen Z, Zhou P, Liu Y, Ji P (2019) A novel approach to uncertainty analysis using methods of hybrid dimension reduction and improved maximum entropy. Struct Multidiscip Optim 60(5):1841–1866. https://doi.org/10.1007/s00158-019-02294-8
18. Xu J, Zhang Y, Dang C (2020) A novel hybrid cubature formula with Pearson system for efficient moment-based uncertainty propagation analysis. Mech Syst Signal Process 140:106661. https://doi.org/10.1016/j.ymssp.2020.106661
19. Xi Z, Hu C, Youn BD (2012) A comparative study of probability estimation methods for reliability analysis. Struct Multidiscip Optim 45(1):33–52. https://doi.org/10.1007/s00158-011-0656-5
20. Pearson K (1894) Contributions to the mathematical theory of evolution. Philos Trans R Soc Lond A 185:71–110
21. Youn BD, Xi Z, Wang P (2008) Eigenvector dimension reduction (EDR) method for sensitivity-free probability analysis. Struct Multidiscip Optim 37(1):13–28. https://doi.org/10.1007/s00158-007-0210-7
22. Johnson NL, Kotz S, Balakrishnan N (1995) Continuous Univariate Distributions, Volume 2. Wiley, Hoboken
23. Li G, He W, Zeng Y (2019) An improved maximum entropy method via fractional moments with Laplace transform for reliability analysis. Struct Multidiscip Optim 59(4):1301–1320. https://doi.org/10.1007/s00158-018-2129-6
24. Huang B, Du X (2005) Uncertainty analysis by dimension reduction integration and saddlepoint approximations. J Mech Des 128(1):26–33. https://doi.org/10.1115/1.2118667
25. Corlu CG, Meterelliyoz M (2016) Estimating the parameters of the generalized lambda distribution: which method performs best? Commun Stat-Simul Comput 45(7):2276–2296. https://doi.org/10.1080/03610918.2014.901355
26. Low YM (2013) A new distribution for fitting four moments and its applications to reliability analysis. Struct Saf 42:12–25. https://doi.org/10.1016/j.strusafe.2013.01.007
27. Xu J, Dang C (2019) A new bivariate dimension reduction method for efficient structural reliability analysis. Mech Syst Signal Process 115:281–300. https://doi.org/10.1016/j.ymssp.2018.05.046
28. Zhao Y-G, Zhang X-Y, Lu Z-H (2018) Complete monotonic expression of the fourth-moment normal transformation for structural reliability. Comput Struct 196:186–199. https://doi.org/10.1016/j.compstruc.2017.11.006
29. Winterstein SR (1988) Nonlinear vibration models for extremes and fatigue. J Eng Mech 114(10):1772–1790. https://doi.org/10.1061/(ASCE)0733-9399(1988)114:10(1772)
30. Zhao Y-G, Zhang X-Y, Lu Z-H (2018) A flexible distribution and its application in reliability engineering. Reliab Eng Syst Saf 176:1–12. https://doi.org/10.1016/j.ress.2018.03.026
31. Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106(4):620. https://doi.org/10.1103/PhysRev.106.620
32. Shore J, Johnson R (1980) Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans Inf Theory 26(1):26–37. https://doi.org/10.1109/TIT.1980.1056144
33. Li G, Zhang K (2011) A combined reliability analysis approach with dimension reduction method and maximum entropy method. Struct Multidiscip Optim 43(1):121–134. https://doi.org/10.1007/s00158-010-0546-2
34. Rajan A, Kuang YC, Ooi MP-L, Demidenko SN, Carstens H (2018) Moment-constrained maximum entropy method for expanded uncertainty evaluation. IEEE Access 6:4072–4082. https://doi.org/10.1109/ACCESS.2017.2787736
35. Hao W, Harlim J (2018) An equation-by-equation method for solving the multidimensional moment constrained maximum entropy problem. Commun Appl Math Comput Sci 13(2):189–214. https://doi.org/10.2140/camcos.2018.13.189
36. He W, Zeng Y, Li G (2019) A novel structural reliability analysis method via improved maximum entropy method based on nonlinear mapping and sparse grid numerical integration. Mech Syst Signal Process 133:106247. https://doi.org/10.1016/j.ymssp.2019.106247
37. Zhang X, Pandey MD (2013) Structural reliability analysis based on the concepts of entropy, fractional moment and dimensional reduction method. Struct Saf 43:28–40. https://doi.org/10.1016/j.strusafe.2013.03.001
38. Xu J (2016) A new method for reliability assessment of structural dynamic systems with random parameters. Struct Saf 60:130–143. https://doi.org/10.1016/j.strusafe.2016.02.005
39. Xu J, Kong F (2018) An adaptive cubature formula for efficient reliability assessment of nonlinear structural dynamic systems. Mech Syst Signal Process 104:449–464. https://doi.org/10.1016/j.ymssp.2017.10.039
40. Xu J, Kong F (2019) Adaptive scaled unscented transformation for highly efficient structural reliability analysis by maximum entropy method. Struct Saf 76:123–134. https://doi.org/10.1016/j.strusafe.2018.09.001
41. He S, Xu J, Zhang Y (2021) Reliability computation via a transformed mixed-degree cubature rule and maximum entropy. Appl Math Model. https://doi.org/10.1016/j.apm.2021.11.016
42. Kagan YY, Jackson DD (1991) Long-term earthquake clustering. Geophys J Int 104(1):117–133. https://doi.org/10.1111/j.1365-246X.1991.tb02498.x
43. Rackwitz R (2002) Optimization and risk acceptability based on the life quality index. Struct Saf 24(2-4):297–331. https://doi.org/10.1016/S0167-4730(02)00029-2
44. Elsner JB, Bossak BH (2001) Bayesian analysis of US hurricane climate. J Clim 14(23):4341–4350. https://doi.org/10.1175/1520-0442(2001)014<4341:BAOUSH>2.0.CO;2
45. Katz RW (2002) Stochastic modeling of hurricane damage. J Appl Meteorol 41(7):754–762. https://doi.org/10.1175/1520-0450(2002)041<0754:SMOHD>2.0.CO;2
46. Matthews MV, Ellsworth WL, Reasenberg PA (2002) A Brownian model for recurrent earthquakes. Bull Seismol Soc Am 92(6):2233–2250. https://doi.org/10.1785/0120010267
47. Ellingwood BR, Lee JY (2016) Managing risks to civil infrastructure due to natural hazards: communicating long-term risks due to climate change. In: Risk Analysis of Natural Hazards, 97–112. Springer. https://doi.org/10.1007/978-3-319-22126-7_7
48. Yang DY, Frangopol DM (2019) Life-cycle management of deteriorating civil infrastructure considering resilience to lifetime hazards: A general approach based on renewal-reward processes. Reliab Eng Syst Saf 183:197–212. https://doi.org/10.1016/j.ress.2018.11.016
49. Ramkrishnan R, Kolathayar S, Sitharam T (2021) Probabilistic seismic hazard analysis of north and central Himalayas using regional ground motion prediction equations. Bull Eng Geol Environ 80(10):8137–8157. https://doi.org/10.1007/s10064-021-02434-9
50. Li Y, Dong Y, Frangopol DM, Gautam D (2020) Long-term resilience and loss assessment of highway bridges under multiple natural hazards. Struct Infrastruct Eng 16(4):626–641. https://doi.org/10.1080/15732479.2019.1699936
51. Ross SM (2014) Introduction to Probability Models. Academic Press, San Diego
52. Smith RL (2003) Statistics of extremes, with applications in environment, insurance, and finance. In: Extreme Values in Finance, Telecommunications, and the Environment. Chapman and Hall/CRC, New York, pp 20–97
53. Li Y, Dong Y (2019) Risk-informed hazard loss of bridges in a life-cycle context. In: ICASP13. https://doi.org/10.22725/ICASP13.120
54. Li Y, Dong Y, Zhu D, et al (2020) Copula-based vulnerability analysis of civil infrastructure subjected to hurricanes. Front Built Environ 6:571911. https://doi.org/10.3389/fbuil.2020.571911
55. Mondoro A, Frangopol DM, Soliman M (2017) Optimal risk-based management of coastal bridges vulnerable to hurricanes. J Infrastruct Syst 23(3):04016046. https://doi.org/10.1061/(ASCE)IS.1943-555X.0000346


Acknowledgments

The main work of this research was conducted at The Hong Kong Polytechnic University (PolyU). The authors gratefully acknowledge the support of PolyU and international collaborators.

Funding

This study has been supported by the Research Institute for Sustainable Urban Development, the Hong Kong Polytechnic University (PolyU1-BBWM), the National Natural Science Foundation of China (Grant No. 52078448), and the Research Grants Council of the Hong Kong Special Administrative Region, China (No. PolyU 15219819 and PolyU 15221521).

Author information

Authors and Affiliations

Authors

Contributions

ZY built the framework of the proposed method and was a major contributor in writing this manuscript. LY analyzed and interpreted the data and wrote this manuscript. DY helped review and edit this manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yaohan Li.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhang, Y., Li, Y. & Dong, Y. Probabilistic analysis of long-term loss incorporating maximum entropy method and analytical higher-order moments. J Infrastruct Preserv Resil 3, 7 (2022). https://doi.org/10.1186/s43065-022-00052-7
