
A bounded optimal control for maximizing the reliability of randomly excited nonlinear oscillators with fractional derivative damping


CHEN Licong 1, ZHU Weiqiu 2

(1. College of Civil Engineering, Huaqiao University, Xiamen, Fujian 361021, China; 2. Department of Mechanics, Zhejiang University, Hangzhou 310027, China)

Abstract: In this paper, a bounded optimal control for maximizing the reliability of randomly excited nonlinear oscillators with fractional derivative damping is proposed. First, the partially averaged Itô equations for the energy processes are derived by using the stochastic averaging method. Then, the dynamical programming equations for the control problems of maximizing the reliability function and maximizing the mean first-passage time are established from the partially averaged Itô equations by using the dynamical programming principle, and are solved to obtain the optimal control law. The conditional reliability function and mean first-passage time of the optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation associated with the fully averaged Itô equations, respectively. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with two examples. In addition, the effect of the fractional derivative order on the reliability of the optimally controlled system is examined.

Keywords: stochastic optimal control; fractional derivative damping; nonlinear oscillator; stochastic averaging method; stochastic dynamical programming

    0 Introduction

Fractional calculus lends itself to a wide range of applications in science and engineering. In mechanics, it is generally recognized as an excellent tool for modeling the frequency-dependent damping behavior of viscoelastic materials. The preliminary investigation in this field is owed to Gemant [1], who first suggested a fractional derivative constitutive relationship to model cyclic-deformation tests performed on viscoelastic material specimens. Later, at the beginning of the 1980s, Bagley and Torvik [2,3] provided the theoretical basis for the use of fractional derivative models to characterize viscoelasticity. Thereafter, many further investigations and comprehensive reviews have been published on this subject; see, e.g., Refs. [4-9] and the references cited therein.

Since stochastic perturbations are common in practice, stochastic dynamical systems with fractional derivative damping have also received much attention in the last three decades. For example, Spanos and Zeldin [10], and Rudinger [11], obtained the stochastic response of fractionally damped systems using a frequency-domain approach. Based on the Laplace transform technique, Agrawal [12] presented a Duhamel integral expression for stochastic dynamic systems with fractional derivative damping. Later, Ye et al. [13] developed a similar scheme in which the Duhamel integral is derived by a Fourier-transform-based technique. Recently, Spanos and Evangelatos [14] obtained the response of a single-degree-of-freedom (SDOF) nonlinear system with fractional derivative damping using time-domain simulation and the statistical linearization technique. Huang and Jin [15] studied the response and stability of SDOF strongly nonlinear oscillators with fractional derivative damping under Gaussian white noise excitations using the stochastic averaging method. Chen and his coworkers extended this

Foundations: The work reported in this paper was supported by the National Natural Science Foundation of China under Grant Nos. 10932009 and 11002059, the Fujian Province Natural Science Foundation of China under Grant No. 2010J05006, the Specialized Research Fund for the Doctoral Program of Higher Education under Grant No. 20103501120003, and the Fundamental Research Funds for Huaqiao University under Grant No. JB-SJ1010.

Brief author introduction: CHEN Licong, 1981-, male, lecturer; main research: nonlinear stochastic dynamics and control. E-mail: chen000241@126.com


method to investigate the stability of randomly excited nonlinear oscillators with fractional derivative damping [16], and later the first-passage failure of MDOF quasi-integrable Hamiltonian systems with fractional derivative damping [17]. More recently, Mario et al. [18] computed the stochastic response of a linear system with fractional derivative damping subjected to stationary and non-stationary random excitations, and the key idea was generalized to fractionally damped Duffing oscillators subjected to a stochastic input [19]. So far, to the authors' knowledge, no work on the optimal control of stochastic dynamical systems with fractional derivative damping is available.

On the other hand, by using the stochastic averaging method and the stochastic dynamical programming principle, Zhu and his co-workers [20,21], and Deng and Zhu [22], have proposed a nonlinear stochastic optimal control strategy for maximizing the reliability of MDOF quasi-Hamiltonian systems in recent years. It has been shown that the optimal control indeed improves the reliability of such systems. In this paper, this strategy is generalized in a straightforward manner to maximize the reliability of n-degree-of-freedom nonlinear oscillators containing fractional derivative damping under random excitations. Two examples are treated for illustration. The analytical solutions are validated by Monte Carlo simulation of the original system.

    1 Stochastic averaging

Consider a controlled n-degree-of-freedom nonlinear oscillator with fractional derivative damping subjected to Gaussian white noise excitations. The equation of motion is

$$\ddot{X}_i + \varepsilon C_i(\mathbf{X},\dot{\mathbf{X}})\,D^{\alpha_i}X_i(t) + g_i(X_i) = \varepsilon u_i(\mathbf{X},\dot{\mathbf{X}}) + \varepsilon^{1/2} f_{ik}(\mathbf{X},\dot{\mathbf{X}})\,\xi_k(t), \qquad i = 1,2,\dots,n;\ k = 1,2,\dots,l \quad (1)$$

where $\mathbf{X}=[X_1,X_2,\dots,X_n]^T$ and $\dot{\mathbf{X}}=[\dot X_1,\dot X_2,\dots,\dot X_n]^T$ are the generalized displacement and generalized velocity vectors, respectively; $\varepsilon$ is a small positive parameter; the term $g_i(X_i)$ is the linear and/or nonlinear decoupled stiffness, which satisfies $g_i(-X_i)=-g_i(X_i)$; the term $\varepsilon u_i$ denotes the weak feedback control; $\varepsilon^{1/2} f_{ik}$ denotes the amplitude of the weak external and/or parametric excitations; $\xi_k(t)$ are Gaussian white noises with correlation functions $E[\xi_k(t)\xi_j(t+\tau)]=2D_{kj}\delta(\tau)$, $k,j=1,\dots,l$; and $\varepsilon C_i(\mathbf{X},\dot{\mathbf{X}})\,D^{\alpha_i}X_i(t)$ denotes the small damping term, in which $D^{\alpha_i}X_i(t)$ denotes the fractional derivative in the Riemann-Liouville sense [23,24], i.e.,

$$D^{\alpha_i}X_i(t) = \frac{1}{\Gamma(1-\alpha_i)}\frac{d}{dt}\int_0^t \frac{X_i(\tau)}{(t-\tau)^{\alpha_i}}\,d\tau, \qquad 0<\alpha_i\le 1 \quad (2)$$

$\Gamma(\cdot)$ is the gamma function and $\alpha_i$ is the fractional order. Note that the term $\varepsilon C_i(\mathbf{X},\dot{\mathbf{X}})\,D^{\alpha_i}X_i(t)$ may represent a linear or nonlinear fractional derivative model. The nonlinear fractional derivative model, for example, may be used to investigate the dynamical behavior of a viscoelastic body with a compressive pre-displacement [25].
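For numerical work, the Riemann-Liouville derivative of Eq. (2) is commonly approximated by the Grünwald-Letnikov scheme, which coincides with it under the zero-initial-history assumption used here. The following is a minimal sketch (the function name and grid are our own choices, not from the paper):

```python
import numpy as np

def gl_fractional_derivative(x, dt, alpha):
    """Grunwald-Letnikov approximation of D^alpha x(t) on a uniform grid,
    assuming zero history for t < 0 (matches Riemann-Liouville for 0 < alpha <= 1)."""
    n = len(x)
    w = np.ones(n)                      # w_j = (-1)^j * binom(alpha, j), by recurrence
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    d = np.empty(n)
    for k in range(n):
        d[k] = np.dot(w[:k + 1], x[k::-1]) / dt**alpha
    return d
```

For $\alpha = 1$ the weights reduce to a backward difference, and for $x(t)=t$ the scheme converges to the known value $D^{\alpha}t = t^{1-\alpha}/\Gamma(2-\alpha)$.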

When $\varepsilon$ is small, the response of Eq. (1) can be assumed to be of the form

$$X_i = A_i\cos\Theta_i(t), \qquad \dot X_i = -A_i\,\nu_i(A_i,\Theta_i)\sin\Theta_i(t) \quad (3)$$

where

$$\Theta_i(t) = \Phi_i(t) + \Gamma_i(t) \quad (4)$$

$$\nu_i(A_i,\Theta_i) = \frac{d\Phi_i}{dt} = \sqrt{\frac{2\,[U_i(A_i)-U_i(A_i\cos\Theta_i)]}{A_i^2\sin^2\Theta_i}} \quad (5)$$


$$U_i(q_i) = \int_0^{q_i} g_i(u)\,du \quad (6)$$

in which $A_i$, $\Theta_i$, $\Phi_i$, $\Gamma_i$ are all random processes; $A_i$, $\nu_i(A_i,\Theta_i)$, and $\Gamma_i$ are the amplitude, instantaneous frequency, and phase angle, respectively, of the i-th oscillator.

Expanding $\nu_i(A_i,\Theta_i)$ into a Fourier series gives

$$\nu_i(A_i,\Theta_i) = \omega_{i0}(A_i) + \sum_{r=1}^{\infty}\omega_{ir}(A_i)\cos r\Theta_i \quad (7)$$

Integrating Eq. (7) with respect to $\Theta_i$ from 0 to $2\pi$ leads to the averaged frequency

$$\omega_i(A_i) = \frac{1}{2\pi}\int_0^{2\pi}\nu_i(A_i,\Theta_i)\,d\Theta_i = \omega_{i0}(A_i) \quad (8)$$

of the i-th oscillator. Then $\Theta_i(t)$ can be approximated as

$$\Theta_i(t) \approx \omega_i(A_i)\,t + \Gamma_i \quad (9)$$
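The quadratures in Eqs. (5) and (8) rarely admit closed forms for nonlinear stiffness, but they are easy to evaluate numerically. As an illustrative sketch (the Duffing stiffness and all parameter values below are our own choices, not from the paper):

```python
import numpy as np

def averaged_frequency(A, omega0=1.0, gamma=0.5, n=4000):
    """Averaged frequency omega(A) of Eq. (8) for a hypothetical Duffing
    stiffness g(x) = omega0^2 x + gamma x^3, U(q) = omega0^2 q^2/2 + gamma q^4/4."""
    U = lambda q: 0.5 * omega0**2 * q**2 + 0.25 * gamma * q**4
    # midpoint grid avoids the removable 0/0 of Eq. (5) at Theta = 0, pi
    theta = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    nu = np.sqrt(2.0 * (U(A) - U(A * np.cos(theta))) / (A * np.sin(theta))**2)
    return nu.mean()                    # (1/2pi) * integral of nu over [0, 2pi]
```

For $\gamma = 0$ this returns $\omega_0$ exactly, while for a hardening stiffness $\gamma > 0$ the averaged frequency grows with amplitude, as expected.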

By using the transformation in Eq. (3) and the relation in Eq. (9), Eq. (1) can be converted into the following equations:

$$\frac{dA_i}{dt} = \varepsilon\,[F_i^{(11)}(\mathbf{A},\boldsymbol{\Theta}) + F_i^{(12)}(\mathbf{A},\boldsymbol{\Theta},\mathbf{u})] + \varepsilon^{1/2} G_{ik}^{(1)}(\mathbf{A},\boldsymbol{\Theta})\,\xi_k(t) \quad (10a)$$

$$\frac{d\Theta_i}{dt} = \omega_i(A_i) + \varepsilon\,[F_i^{(21)}(\mathbf{A},\boldsymbol{\Theta}) + F_i^{(22)}(\mathbf{A},\boldsymbol{\Theta},\mathbf{u})] + \varepsilon^{1/2} G_{ik}^{(2)}(\mathbf{A},\boldsymbol{\Theta})\,\xi_k(t) \quad (10b)$$

where $\mathbf{A}=[A_1,A_2,\dots,A_n]^T$, $\boldsymbol{\Theta}=[\Theta_1,\Theta_2,\dots,\Theta_n]^T$, and

$$F_i^{(11)} = \frac{A_i\nu_i\sin\Theta_i}{g_i(A_i)}\,C_i\,D^{\alpha_i}\!\big(A_i\cos\Theta_i\big),\qquad F_i^{(12)} = -\frac{A_i\nu_i\sin\Theta_i}{g_i(A_i)}\,u_i$$

$$F_i^{(21)} = \frac{\nu_i\cos\Theta_i}{g_i(A_i)}\,C_i\,D^{\alpha_i}\!\big(A_i\cos\Theta_i\big),\qquad F_i^{(22)} = -\frac{\nu_i\cos\Theta_i}{g_i(A_i)}\,u_i$$

$$G_{ik}^{(1)} = -\frac{A_i\nu_i\sin\Theta_i}{g_i(A_i)}\,f_{ik},\qquad G_{ik}^{(2)} = -\frac{\nu_i\cos\Theta_i}{g_i(A_i)}\,f_{ik} \quad (11)$$

Note that Eq. (10) can be modeled as a set of Stratonovich stochastic differential equations and converted into Itô stochastic differential equations by adding the Wong-Zakai correction terms.

    The result is

$$dA_i = \varepsilon\,m_i^{(1)}(\mathbf{A},\boldsymbol{\Theta})\,dt + \varepsilon^{1/2}\sigma_{ik}^{(1)}(\mathbf{A},\boldsymbol{\Theta})\,dB_k(t)$$
$$d\Theta_i = [\omega_i(A_i) + \varepsilon\,m_i^{(2)}(\mathbf{A},\boldsymbol{\Theta})]\,dt + \varepsilon^{1/2}\sigma_{ik}^{(2)}(\mathbf{A},\boldsymbol{\Theta})\,dB_k(t) \quad (12)$$

where $B_k(t)$ denote unit Wiener processes, and

$$m_i^{(1)} = F_i^{(11)} + F_i^{(12)} + D_{k_1k_2}\left(G_{jk_1}^{(1)}\frac{\partial G_{ik_2}^{(1)}}{\partial A_j} + G_{jk_1}^{(2)}\frac{\partial G_{ik_2}^{(1)}}{\partial \Theta_j}\right)$$
$$m_i^{(2)} = F_i^{(21)} + F_i^{(22)} + D_{k_1k_2}\left(G_{jk_1}^{(1)}\frac{\partial G_{ik_2}^{(2)}}{\partial A_j} + G_{jk_1}^{(2)}\frac{\partial G_{ik_2}^{(2)}}{\partial \Theta_j}\right)$$
$$b_{ij}^{(1)} = \sigma_{ie}^{(1)}\sigma_{je}^{(1)} = 2D_{k_1k_2}\,G_{ik_1}^{(1)}G_{jk_2}^{(1)}$$
$$b_{ij}^{(2)} = \sigma_{ie}^{(2)}\sigma_{je}^{(2)} = 2D_{k_1k_2}\,G_{ik_1}^{(2)}G_{jk_2}^{(2)}$$
$$b_{ij}^{(0)} = \sigma_{ie}^{(1)}\sigma_{je}^{(2)} = 2D_{k_1k_2}\,G_{ik_1}^{(1)}G_{jk_2}^{(2)}, \qquad i,j = 1,2,\dots,n;\ k_1,k_2 = 1,2,\dots,l \quad (13)$$

Eq. (12) shows that $[A_1,A_2,\dots,A_n]^T$ are slowly varying processes while $[\Theta_1,\Theta_2,\dots,\Theta_n]^T$ are rapidly varying processes. Suppose that system (1) has no internal resonance. According to a theorem due to Stratonovich and Khasminskii [26,27], $\mathbf{A}(t)$ in Eq. (12) converges weakly to

an n-dimensional diffusion Markov process as $\varepsilon \to 0$, over a time interval $0 \le t \le T$ with $T \sim O(\varepsilon^{-1})$, i.e.,


$$dA_i = \left[\bar m_i(\mathbf{A}) + \varepsilon\big\langle F_i^{(12)}(\mathbf{A},\boldsymbol{\Theta},\mathbf{u})\big\rangle_t\right]dt + \bar\sigma_{ik}(\mathbf{A})\,dB_k(t) \quad (14)$$

in which

$$\bar m_i(\mathbf{A}) = \varepsilon\left\langle F_i^{(11)} + D_{k_1k_2}\left(G_{ik_1}^{(1)}\frac{\partial G_{ik_2}^{(1)}}{\partial A_i} + G_{ik_1}^{(2)}\frac{\partial G_{ik_2}^{(1)}}{\partial \Theta_i}\right)\right\rangle_t$$
$$\bar b_{ij}(\mathbf{A}) = \bar\sigma_{ik}\bar\sigma_{jk} = \varepsilon\left\langle 2D_{k_1k_2}\,G_{ik_1}^{(1)}G_{jk_2}^{(1)}\right\rangle_t \quad (15)$$

Here $\langle\,\cdot\,\rangle_t$ denotes the time-averaging operation, i.e.,

$$\langle\,\cdot\,\rangle_t = \lim_{T\to\infty}\frac{1}{T}\int_0^T (\,\cdot\,)\,dt = \frac{1}{(2\pi)^n}\int_0^{2\pi}\!\!\cdots\!\int_0^{2\pi} (\,\cdot\,)\,d\boldsymbol{\Theta} \quad (16)$$

Note that the detailed derivation of the term $\langle F_i^{(11)}\rangle_t$ is presented in the appendix and is also available in Ref. [17]; the terms containing the control forces, $\langle F_i^{(12)}\rangle_t$, will be averaged later.

Since

$$H_i = U_i(A_i) = U_i(-A_i) \quad (17)$$

by following the Itô differential rule, the partially averaged Itô equations for $H_i$ can be derived from Eq. (14) for the amplitudes $A_i$ as follows:

$$dH_i = \left[m_i^{11}(\mathbf{H}) + m_i^{12}(\mathbf{H},\mathbf{u})\right]dt + \sigma_{ie}(\mathbf{H})\,dB_e(t) \quad (18)$$

where $\mathbf{H} = [H_1,H_2,\dots,H_n]^T$ and

$$m_i^{11} = \left[\bar m_i\,g_i(A_i) + \frac{1}{2}\,\bar b_{ii}\,\frac{dg_i(A_i)}{dA_i}\right]_{A_i = U_i^{-1}(H_i)}$$
$$m_i^{12} = \varepsilon\left\langle F_i^{(12)}(\mathbf{A},\boldsymbol{\Theta},\mathbf{u})\,g_i(A_i)\right\rangle_t\Big|_{A_i=U_i^{-1}(H_i)} = \varepsilon\left\langle u_i\dot X_i\right\rangle_t\Big|_{A_i=U_i^{-1}(H_i)}$$
$$b_{ij} = \sigma_{ie}\sigma_{je} = \left[g_i(A_i)\,g_j(A_j)\,\bar b_{ij}\right]_{\mathbf{A}=\mathbf{U}^{-1}(\mathbf{H})} \quad (19)$$
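A one-degree-of-freedom instance of Eq. (18) can be integrated directly by the Euler-Maruyama scheme. The sketch below uses hypothetical scalar coefficients $m(H)=D-cH$ and $\sigma(H)=\sqrt{2DH}$, chosen for illustration only (they mimic the structure that appears in the example of Section 4):

```python
import numpy as np

def simulate_energy_sde(m, sigma, H0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of a scalar energy equation
    dH = m(H) dt + sigma(H) dB(t), cf. Eq. (18) with n = 1."""
    rng = np.random.default_rng(seed)
    H = np.empty(n_steps + 1)
    H[0] = H0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        # clamp at 0: the energy process cannot become negative
        H[k + 1] = max(H[k] + m(H[k]) * dt + sigma(H[k]) * dB, 0.0)
    return H

D, c = 0.1, 0.5          # hypothetical excitation intensity and damping coefficient
H = simulate_energy_sde(lambda h: D - c * h, lambda h: np.sqrt(2.0 * D * h), H0=1.0)
```

With these coefficients the stationary mean of H is D/c, and the sample path relaxes toward it from the initial energy H0.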

2 Nonlinear stochastic optimal control

For a mechanical or structural dynamical system, $\mathbf{H}$ usually represents the total energy of the system, while $H_i$ represents the energy of the i-th degree of freedom. Moreover, each $H_i$ may vary within a sub-interval of $[0,\infty)$. It is therefore reasonable to assume that first-passage failure occurs when $\mathbf{H}(t)$ exceeds a certain critical value $H_c$ for the first time. For system (18), the safety domain $\Omega_s$ is bounded by the boundary $\Gamma_0$ (on which at least one $H_i$ vanishes) and the critical boundary $\Gamma_c$. Let $J(\mathbf{u})$ denote the reliability function of system (18), defined as the probability of the process $\mathbf{H}(t)$ remaining in the safety domain $\Omega_s$ within the time interval $[0,t_f]$, i.e.,

$$J(\mathbf{u}) = \mathrm{Prob}\left\{\mathbf{H}(\tau,\mathbf{u})\in\Omega_s,\ \tau\in[0,t_f]\right\} \quad (20)$$

For the control problem of reliability maximization of system (18), introduce the value function

$$V(t,\mathbf{H}) = \sup_{\mathbf{u}\in U}\mathrm{Prob}\left\{\mathbf{H}(\tau,\mathbf{u})\in\Omega_s,\ \tau\in[t,t_f]\right\} \quad (21)$$

Using the dynamical programming principle, we can obtain the following dynamical programming equation:

$$\frac{\partial V}{\partial t} = -\sup_{\mathbf{u}\in U}\left\{\sum_{i=1}^n\left[m_i^{11}(\mathbf{H}) + m_i^{12}(\mathbf{H},\mathbf{u})\right]\frac{\partial V}{\partial H_i} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}(\mathbf{H})\frac{\partial^2 V}{\partial H_i\,\partial H_j}\right\} \quad (22)$$


The associated boundary conditions are

$$V(t,\mathbf{H}) = 0, \qquad \mathbf{H}\in\Gamma_c \quad (23)$$
$$V(t,\mathbf{H}) = \text{finite}, \qquad \mathbf{H}\in\Gamma_0 \quad (24)$$

and the final-time condition is

$$V(t_f,\mathbf{H}) = 1, \qquad \mathbf{H}\in\Omega_s \quad (25)$$

Eqs. (22)-(25) constitute the mathematical formulation of the problem of feedback maximization of the reliability function of system (18).

Similarly, the control problem of maximizing the mean first-passage time of system (18) can be formulated. Let $E[\tau(\mathbf{H},\mathbf{u})]$ denote the mean first-passage time of the controlled system, and define the value function

$$V_1(\mathbf{H}) = \sup_{\mathbf{u}\in U} E[\tau(\mathbf{H},\mathbf{u})] \quad (26)$$

which denotes the mean first-passage time of the optimally controlled system. Based on the dynamical programming principle, the following dynamical programming equation for the value function $V_1(\mathbf{H})$ can be obtained:

$$-1 = \sup_{\mathbf{u}\in U}\left\{\sum_{i=1}^n\left[m_i^{11}(\mathbf{H}) + m_i^{12}(\mathbf{H},\mathbf{u})\right]\frac{\partial V_1}{\partial H_i} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}(\mathbf{H})\frac{\partial^2 V_1}{\partial H_i\,\partial H_j}\right\} \quad (27)$$

The boundary conditions are

$$V_1(\mathbf{H}) = 0, \qquad \mathbf{H}\in\Gamma_c \quad (28)$$
$$V_1(\mathbf{H}) = \text{finite}, \qquad \mathbf{H}\in\Gamma_0 \quad (29)$$

Eqs. (27)-(29) constitute the mathematical formulation of the problem of feedback maximization of the mean first-passage time of system (18).

The optimal control law can be determined by maximizing the right-hand side of Eq. (22) or Eq. (27) with respect to $\mathbf{u}\in U$. Suppose that the control constraints are of the form

$$-u_{i0} \le u_i \le u_{i0}, \qquad i = 1,\dots,n \quad (30)$$

where the $u_{i0}$ are positive constants. The supremum is attained when $|u_i| = u_{i0}$ and $u_i\dot X_i\,\partial V/\partial H_i$ (or $u_i\dot X_i\,\partial V_1/\partial H_i$) is positive, i.e.,

$$u_i^{opt} = u_{i0}\,\mathrm{sgn}\!\left[\dot X_i\,\frac{\partial V}{\partial H_i}\right] \quad (31a)$$

or

$$u_i^{opt} = u_{i0}\,\mathrm{sgn}\!\left[\dot X_i\,\frac{\partial V_1}{\partial H_i}\right] \quad (31b)$$

Since the reliability function and the mean first-passage time are monotonically decreasing functions of $H_i$, i.e., $\partial V/\partial H_i < 0$ and $\partial V_1/\partial H_i < 0$, Eq. (31) reduces to

$$u_i^{opt} = -u_{i0}\,\mathrm{sgn}(\dot X_i) \quad (32)$$

Inserting $u_i^{opt}$ into Eq. (18) to replace $u_i$ and completing the averaging, we obtain the final dynamical programming equation for the control problem of reliability maximization:

$$\frac{\partial V}{\partial t} = -\left[\sum_{i=1}^n \bar m_i^{1}(\mathbf{H})\frac{\partial V}{\partial H_i} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}(\mathbf{H})\frac{\partial^2 V}{\partial H_i\,\partial H_j}\right] \quad (33)$$

where
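Eq. (32) is a bang-bang law: the bounded actuator always opposes the instantaneous velocity, so the control power $u_i\dot X_i$ is never positive and energy is extracted at the maximum admissible rate. A minimal sketch (the function and variable names are ours):

```python
import numpy as np

def u_opt(x_dot, u0):
    """Bang-bang optimal control of Eq. (32): u_i = -u0_i * sgn(dX_i/dt)."""
    return -u0 * np.sign(x_dot)

x_dot = np.array([-1.2, 0.0, 0.7])     # sample velocities of three oscillators
power = u_opt(x_dot, 0.3) * x_dot      # control power u_i * dX_i/dt, componentwise <= 0
```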


$$\bar m_i^{1}(\mathbf{H}) = m_i^{11}(\mathbf{H}) + m_i^{12}(\mathbf{H},\mathbf{u}^{opt}) \quad (34)$$

The boundary and final-time conditions remain the same as those in Eqs. (23)-(25). Similarly, the final dynamical programming equation for the control problem of mean first-passage time maximization is


$$-1 = \sum_{i=1}^n \bar m_i^{1}(\mathbf{H})\frac{\partial V_1}{\partial H_i} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}(\mathbf{H})\frac{\partial^2 V_1}{\partial H_i\,\partial H_j} \quad (35)$$

The associated boundary conditions are the same as those in Eqs. (28) and (29).

3 Backward Kolmogorov equation and Pontryagin equation for the optimally controlled system

Inserting $\bar m_i^{1}$ into Eq. (18) to replace its drift coefficients yields the fully averaged Itô equations of the optimally controlled system:

$$dH_i = \bar m_i^{1}(\mathbf{H})\,dt + \sigma_{ie}(\mathbf{H})\,dB_e(t) \quad (36)$$

Let $R(t\,|\,\mathbf{H}_0)$ denote the conditional reliability function, defined as the probability of the process $\mathbf{H}(t)$ remaining in the safety domain $\Omega_s$ within the time interval $(0,t]$, given that the initial state $\mathbf{H}_0$ lies in $\Omega_s$, i.e.,

$$R(t\,|\,\mathbf{H}_0) = \mathrm{Prob}\left\{\mathbf{H}(\tau)\in\Omega_s,\ \tau\in(0,t]\;\big|\;\mathbf{H}_0\in\Omega_s\right\} \quad (37)$$

Obviously, the conditional reliability function $R(t\,|\,\mathbf{H}_0)$ satisfies the backward Kolmogorov equation

$$\frac{\partial R}{\partial t} = \sum_{i=1}^n \bar m_i^{1}(\mathbf{H}_0)\frac{\partial R}{\partial H_{0i}} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}(\mathbf{H}_0)\frac{\partial^2 R}{\partial H_{0i}\,\partial H_{0j}} \quad (38)$$

with boundary conditions

$$R(t\,|\,\mathbf{H}_0) = 0, \qquad \mathbf{H}_0\in\Gamma_c \quad (39)$$
$$R(t\,|\,\mathbf{H}_0) = \text{finite}, \qquad \mathbf{H}_0\in\Gamma_0 \quad (40)$$

and initial condition

$$R(0\,|\,\mathbf{H}_0) = 1, \qquad \mathbf{H}_0\in\Omega_s \quad (41)$$

The conditional probability density of the first-passage time can then be derived as

$$p(T\,|\,\mathbf{H}_0) = -\left.\frac{\partial R(t\,|\,\mathbf{H}_0)}{\partial t}\right|_{t=T} \quad (42)$$

where T is the first-passage time.
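For a single energy variable, Eqs. (38)-(41) form a one-dimensional parabolic problem that an explicit finite-difference scheme solves readily. The sketch below reflects our own discretization choices (the drift and diffusion are user-supplied; the time step must satisfy roughly dt ≲ dH²/max b for stability):

```python
import numpy as np

def reliability_fd(m, b, Hc, t_final, nH=200, nt=100_000):
    """Explicit finite differences for the scalar backward Kolmogorov equation (38):
    dR/dt = m(H0) dR/dH0 + 0.5 b(H0) d2R/dH0^2,
    with absorbing boundary R = 0 at H0 = Hc (Eq. 39) and R(0|H0) = 1 (Eq. 41)."""
    H = np.linspace(0.0, Hc, nH + 1)
    dH = H[1] - H[0]
    dt = t_final / nt
    mv, bv = m(H), b(H)
    R = np.ones(nH + 1)
    for _ in range(nt):
        Rn = R.copy()
        Rn[1:-1] = (R[1:-1]
                    + dt * mv[1:-1] * (R[2:] - R[:-2]) / (2.0 * dH)
                    + 0.5 * dt * bv[1:-1] * (R[2:] - 2.0 * R[1:-1] + R[:-2]) / dH**2)
        Rn[0] = R[0] + dt * mv[0] * (R[1] - R[0]) / dH  # degenerate boundary at H0 = 0
        Rn[-1] = 0.0                                    # absorbing critical boundary
        R = Rn
    return H, R
```

For example, with coefficients of the shape met in Section 4, m(H) = D − cH and b(H) = 2DH, the returned profile decreases from R ≈ 1 near H0 = 0 to 0 at the critical energy Hc.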

Similarly, the mean first-passage time of the optimally controlled system can be obtained by solving the Pontryagin equation

$$-1 = \sum_{i=1}^n \bar m_i^{1}\frac{\partial \mu_1}{\partial H_{0i}} + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n b_{ij}\frac{\partial^2 \mu_1}{\partial H_{0i}\,\partial H_{0j}} \quad (43)$$

together with the boundary conditions

$$\mu_1(\mathbf{H}_0) = 0, \qquad \mathbf{H}_0\in\Gamma_c \quad (44)$$
$$\mu_1(\mathbf{H}_0) = \text{finite}, \qquad \mathbf{H}_0\in\Gamma_0 \quad (45)$$

The reliability function of the optimally controlled system can be obtained by solving the

corresponding final dynamical programming equation (33) with boundary conditions (23) and (24) and final-time condition (25), or by solving the backward Kolmogorov equation (38) with its boundary conditions (39) and (40) and initial condition (41). The mean first-passage time of the optimally controlled system can be obtained either by solving the final dynamical programming equation (35) together with boundary conditions (28) and (29), or by solving the Pontryagin equation (43) together with its boundary conditions (44) and (45). The difference between the two formulations is that in


Eq. (33) the time t runs backward from the final time $t_f$ to 0, while in Eq. (38) the time t runs forward from the initial time 0 to $t_f$. Thus, the following expressions can be obtained:


$$R(t\,|\,\mathbf{H}_0) = V_{opt}(t_f - t,\mathbf{H})\big|_{\mathbf{H}=\mathbf{H}_0}, \qquad \mu_1(\mathbf{H}_0) = V_{1,opt}(\mathbf{H})\big|_{\mathbf{H}=\mathbf{H}_0} \quad (46)$$

     220 4 Examples

4.1 Example 1

Consider the controlled Bagley-Torvik equation subjected to random excitation. The equation of motion is of the form

$$\ddot X(t) + \beta\,D^{\alpha}X(t) + \omega_0^2 X = u + \xi(t) \quad (47)$$

where $\beta$ and $\omega_0$ are positive constants; $\alpha$ ($0<\alpha\le 1$) is the fractional order; u is the control force, subjected to the control constraint $|u|\le u_0$; and $\xi(t)$ is a Gaussian white noise with intensity 2D.

By following the procedure proposed in Section 2, the partially averaged Itô equation for H can be derived in the same form as Eq. (18), with drift and diffusion coefficients

$$\bar m = m_{11}(H) + m_{12}(H,u), \qquad \bar b_{11} = 2DH \quad (48)$$

where

$$m_{11} = -\frac{\beta H\sin(\alpha\pi/2)}{\omega_0^{1-\alpha}} + D, \qquad m_{12} = -\big\langle u\,A\,\omega_0\sin\Theta\big\rangle_{\Theta}\Big|_{A=\sqrt{2H}/\omega_0} \quad (49)$$

Obviously, the optimal control law takes the form of Eq. (32). Substituting $u^{opt}$ into Eq. (48) to replace u and completing the averaging, we obtain the fully averaged Itô equation

$$dH = \bar m_1(H)\,dt + \bar\sigma_1(H)\,dB(t) \quad (50)$$

where

$$\bar m_1 = m_{11} - \frac{2u_0\sqrt{2H}}{\pi} \quad (51)$$
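The averaged control term in Eq. (51) follows from $\langle|\sin\Theta|\rangle = 2/\pi$ and the amplitude-energy relation $A=\sqrt{2H}/\omega_0$. A quick numerical check (parameter values arbitrary, chosen only for illustration):

```python
import numpy as np

# Eq. (32) gives u_opt = -u0*sgn(x_dot); with x_dot = -A*w0*sin(Theta), the
# partial average of u_opt*x_dot over Theta should equal -(2/pi)*u0*sqrt(2H).
u0, w0, H = 0.3, 1.5, 2.0
A = np.sqrt(2.0 * H) / w0                       # from H = w0^2 A^2 / 2
theta = (np.arange(200_000) + 0.5) * 2.0 * np.pi / 200_000
x_dot = -A * w0 * np.sin(theta)
avg = np.mean(-u0 * np.sign(x_dot) * x_dot)     # = -u0 * <|x_dot|>
exact = -2.0 * u0 * np.sqrt(2.0 * H) / np.pi
```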

The backward Kolmogorov equation associated with the fully averaged Itô equation (50) is of the form

$$\frac{\partial R}{\partial t} = \bar m_1(H_0)\frac{\partial R}{\partial H_0} + DH_0\frac{\partial^2 R}{\partial H_0^2} \quad (52)$$

The boundary conditions and initial condition are

$$R(t\,|\,H_0) = 0, \qquad H_0 = H_c \quad (53)$$
$$\frac{\partial R}{\partial t} = D\frac{\partial R}{\partial H_0}, \qquad H_0 = 0 \quad (54)$$
$$R(0\,|\,H_0) = 1, \qquad 0 \le H_0 < H_c \quad (55)$$

Similarly, the Pontryagin equation associated with the fully averaged Itô equation (50) is of the form

$$\bar m_1(H_0)\frac{d\mu_1}{dH_0} + DH_0\frac{d^2\mu_1}{dH_0^2} = -1 \quad (56)$$

The boundary conditions are

$$\mu_1(H_0) = 0, \qquad H_0 = H_c \quad (57)$$
$$D\frac{d\mu_1}{dH_0} = -1, \qquad H_0 = 0 \quad (58)$$
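The two-point boundary-value problem (56)-(58) is degenerate at $H_0=0$ (the diffusion $DH_0$ vanishes there), but setting $y=d\mu_1/dH_0$ turns it into a first-order linear ODE that can be marched from the origin, where condition (58) fixes $y(0)=-1/D$. A sketch with implicit-Euler marching (all parameter values arbitrary):

```python
import numpy as np

def mean_first_passage_time(Hc, beta, alpha, w0, D, u0, n=20_000):
    """Solve the Pontryagin problem (56)-(58):
    m1(H) mu' + D H mu'' = -1,  mu(Hc) = 0,  D mu'(0) = -1,
    with the drift m1 taken from Eqs. (49) and (51). Returns the grid and mu(H)."""
    m1 = lambda H: (-beta * H * np.sin(alpha * np.pi / 2.0) / w0**(1.0 - alpha)
                    + D - 2.0 * u0 * np.sqrt(2.0 * H) / np.pi)
    H = np.linspace(Hc / n, Hc, n)
    h = H[1] - H[0]
    y = np.empty(n)                       # y = d(mu)/dH
    y[0] = -1.0 / m1(H[0])                # quasi-static start, consistent with (58)
    for k in range(n - 1):
        # implicit Euler for D*H*y' = -1 - m1(H)*y (stable near the degenerate origin)
        a = D * H[k + 1]
        y[k + 1] = (y[k] - h / a) / (1.0 + h * m1(H[k + 1]) / a)
    mu = np.zeros(n)
    for k in range(n - 2, -1, -1):        # mu(Hc) = 0; integrate backward (trapezoid)
        mu[k] = mu[k + 1] - 0.5 * h * (y[k] + y[k + 1])
    return H, mu
```

The computed $\mu_1$ is largest near $H_0=0$ and decreases monotonically to zero at the critical energy $H_c$; raising the control bound $u_0$ lengthens the mean first-passage time, which is the qualitative effect the control strategy is designed to produce.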


Some numerical results for the reliability function and mean first-passage time of the uncontrolled and optimally controlled system (47) for different fractional orders are shown in
