Stochastic Optimization in Finance


     Krastyu Gumnerov

Institute of Information Technologies, Bulgarian Academy of Sciences (BAS)


Financial activity, like many other activities, has two characteristics:

1) Decision-making is under uncertainty, i.e., it depends on future values of parameters unknown at the moment of decision-making; with respect to the information available at that moment they are random quantities.

2) The decisions are optimal with respect to some objective.

Thus it is natural for a system for financial planning (portfolio management) to have two modules:

1. a module describing the random quantities of the model and their evolution (a scenario generator);

2. an optimization module for a given objective function and variables.

This review examines some methods for building the second module. Its purpose is to offer a brief description of the main approaches to dynamic stochastic optimization and to show some of their applications in finance. The author hopes in this way to attract the attention of Bulgarian experts in financial mathematics to these approaches, so far unpopular in Bulgaria.

I. General Statement of the Dynamic Stochastic Optimization Problem (Stochastic Control)

To understand stochastic control it is useful to keep in mind the analogy with some simpler problems:

1) the elementary problem of finding the conditional extremum of a function;

2) the problems of the calculus of variations and the variational approach in classical mechanics and mathematical physics;

3) the problems of deterministic optimal control.

In general, the solution of these problems, including stochastic control problems, reduces to optimality conditions, most often in the form of equations. These are the equations of the considered system: they describe the evolution of the parameters defining it. Characteristic examples are: the equations for the stationary points of a function of numerical variables, the Kuhn-Tucker equations, the Euler-Lagrange equations of the calculus of variations, the equations of mathematical physics, the Hamilton-Jacobi equation, and the Hamilton-Jacobi-Bellman equation.

The advantage of this approach is that it sometimes allows one to see the patterns of the system under consideration, and sometimes permits a qualitative investigation of the solutions of the problem. Stochastic programming in fact gives us methods for the numerical solution of these equations, but in principle other methods are also possible, and in some special cases solutions can be obtained in explicit form. An example is the Merton problem.

In the following we give a brief presentation of a basic theorem of stochastic control, as well as a simplified variant of the Merton problem.

Assume the system in question is described in a probability space $(\Omega, \mathcal{F}, P)$ by an Itô process of the form

(1)  $dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dB_t$,

where $X_t \in \mathbf{R}^n$, $b: \mathbf{R} \times \mathbf{R}^n \times U \to \mathbf{R}^n$, $\sigma: \mathbf{R} \times \mathbf{R}^n \times U \to \mathbf{R}^{n \times m}$, $B_t$ is the $m$-dimensional Brownian motion and $u_t \in U \subset \mathbf{R}^k$ is a parameter whose values in a given set $U$ we can choose at each moment $t$ to control the system. Thus $u_t = u(t, \omega)$ is also a stochastic process, $\mathcal{F}_t^{(m)}$-adapted. It will be called a "control".

Let $\{X_h^{s,x}\}_{h \ge s}$ be the solution of (1) with $X_s = x$, i.e.

$X_h^{s,x} = x + \int_s^h b(r, X_r^{s,x}, u_r)\,dr + \int_s^h \sigma(r, X_r^{s,x}, u_r)\,dB_r, \quad h \ge s$.

Let $L: \mathbf{R} \times \mathbf{R}^n \times U \to \mathbf{R}$ and $K: \mathbf{R} \times \mathbf{R}^n \to \mathbf{R}$ be given continuous functions, let $G \subset \mathbf{R} \times \mathbf{R}^n$ be a domain, and let $\hat{T}$ be the first exit time after $s$ from $G$ for the process $\{X_r^{s,x}\}_{r \ge s}$:

$\hat{T} = \hat{T}^{s,x}(\omega) = \inf\{r > s : (r, X_r^{s,x}(\omega)) \notin G\}$.

We define the quantity "performance":

(2)  $J^u(s,x) = E^{s,x}\left[\int_s^{\hat{T}} L(r, X_r, u_r)\,dr + K(\hat{T}, X_{\hat{T}})\,\chi_{\{\hat{T}<\infty\}}\right]$.

To simplify the notation we introduce

$Y_t = (s+t, X_{s+t}^{s,x})$ for $t \ge 0$, so that $Y_0 = (s, x) = y$, and $T = \hat{T} - s$.

Then (1) becomes

$dY_t = b(Y_t, u_t)\,dt + \sigma(Y_t, u_t)\,dB_t$.

The problem is, for each $y \in G$, to find a control $u^* = u^*(t,\omega) = u^*(y,t,\omega)$ such that

$J^{u^*}(y) = \sup_u J^u(y)$.

The function $\Phi(y) = J^{u^*}(y)$ is called the optimal performance.

The optimality condition is given by equations satisfied by the function $\Phi(y)$, formulated in the theorem that follows. For $v \in U$ and $g \in C_0^2(\mathbf{R} \times \mathbf{R}^n)$ we define

$(D^v g)(y) = \frac{\partial g}{\partial s}(y) + \sum_{i=1}^n b_i(y,v)\frac{\partial g}{\partial x_i}(y) + \sum_{i,j=1}^n a_{ij}(y,v)\frac{\partial^2 g}{\partial x_i \partial x_j}(y)$,

where $a_{ij} = \frac{1}{2}(\sigma\sigma^T)_{ij}$.

For each choice of the function $u: \mathbf{R} \times \mathbf{R}^n \to \mathbf{R}^k$ the operator $A$, defined by

$(Ag)(y) = (D^{u(y)}g)(y), \quad g \in C_0^2(\mathbf{R} \times \mathbf{R}^n)$,

is the infinitesimal generator of the process $Y_t$ which is the solution of the equation

$dY_t = b(Y_t, u(Y_t))\,dt + \sigma(Y_t, u(Y_t))\,dB_t$.

Theorem 1 ([30]). Let the function $\Phi$ be bounded and belong to $C^2(G) \cap C(\bar{G})$; let $T_G < \infty$ a.s. for each $y \in G$ and let there exist an optimal control $u^*$. Then

(3)  $\sup_{v \in U}\{L(y, v) + (D^v \Phi)(y)\} = 0, \quad y \in G$,

$\Phi(y) = K(y), \quad y \in \partial G$.

The supremum in (3) is attained when $v = u^*(y)$, where $u^*$ is an optimal control:

$L(y, u^*(y)) + (D^{u^*(y)}\Phi)(y) = 0, \quad y \in G$.

Equation (3) is called the Hamilton-Jacobi-Bellman equation (HJB).

Example 1. Portfolio optimization ([30]). There are given two assets $p_1$ and $p_2$ with price processes satisfying the equations

(4)  $dp_1 = p_1(a\,dt + \sigma\,dB_t)$,

(5)  $dp_2 = p_2 b\,dt$,  with $b < a$.

At the moment $t$ let $X_t$ be the investor's wealth. He divides it into two parts: $u_t X_t$ and $(1-u_t)X_t$, $0 \le u_t < 1$. With the part $u_t X_t$ he buys the asset $p_1$, and with the part $(1-u_t)X_t$ the asset $p_2$. In this way he composes a portfolio which contains an amount $\frac{u_t X_t}{p_1(t)}$ of the asset $p_1$ and an amount $\frac{(1-u_t)X_t}{p_2(t)}$ of the asset $p_2$. The portfolio price increase $dX_t$ will be

$dX_t = \frac{u_t X_t}{p_1}\,dp_1 + \frac{(1-u_t)X_t}{p_2}\,dp_2 = u_t X_t(a\,dt + \sigma\,dB_t) + (1-u_t)X_t b\,dt$.


At the initial moment $s < T$ the investor's wealth is given: $X_s = x$. Let the performance $N$ be an increasing concave function of the wealth $X_t$, for example $N(X_t) = X_t^r$, $0 < r < 1$.

The investor has an investment horizon $T$ and, trading without borrowing, he wants to maximize the performance at the moment $T$; more exactly, to maximize the quantity

$J^u(s, x) = E^{s,x}[N(X_\tau^u)]$,

where $E^{s,x}$ is the expectation with respect to the probability law of the process which at the moment $s$ has the value $x$, and $\tau$ is the first exit time from the domain $G = \{(t, x): t < T,\ x > 0\}$.

The problem is to find a function $\Phi(s, x)$ and a stochastic (Markovian) process $u_t^*$, $0 \le u_t^* < 1$, satisfying the conditions

$\Phi(s, x) = \sup\{J^u(s, x) : u$ is a Markovian process, $0 \le u_t < 1\} = J^{u^*}(s, x)$.


To solve this problem we compose the Hamilton-Jacobi-Bellman (HJB) equation. The operator $D^v$ is

$(D^v f)(t,x) = \frac{\partial f}{\partial t} + x(av + b(1-v))\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 v^2 x^2 \frac{\partial^2 f}{\partial x^2}$.

The HJB equation is

(6)  $\sup_v (D^v \Phi)(t,x) = 0$ for $(t,x) \in G$,

$\Phi(T,x) = N(x)$; $\Phi(t,0) = N(0)$ for $t \le T$.

From this equation, for each $(t, x)$ we find $v = u(t, x)$ so that the function

(7)  $\eta(v) = \frac{\partial \Phi}{\partial t} + x(b + (a-b)v)\frac{\partial \Phi}{\partial x} + \frac{1}{2}v^2 x^2 \sigma^2 \frac{\partial^2 \Phi}{\partial x^2}$

has a maximum. The function $\eta(v)$ is a polynomial of second order in $v$; hence, if

$\frac{\partial \Phi}{\partial x} > 0$ and $\frac{\partial^2 \Phi}{\partial x^2} < 0$,

it attains its maximum for

(8)  $v = u(t,x) = -\dfrac{(a-b)\frac{\partial \Phi}{\partial x}}{\sigma^2 x \frac{\partial^2 \Phi}{\partial x^2}}$.

Substituting (8) in (7) and (6), we obtain the following nonlinear boundary problem for $\Phi(t, x)$:

(9)  $\frac{\partial \Phi}{\partial t} + bx\frac{\partial \Phi}{\partial x} - \frac{(a-b)^2}{2\sigma^2}\,\dfrac{\left(\frac{\partial \Phi}{\partial x}\right)^2}{\frac{\partial^2 \Phi}{\partial x^2}} = 0$,

with the boundary conditions of (6).



Let $N(x) = x^r$, $0 < r < 1$, and let us look for $\Phi(t, x)$ in the form

$\Phi(t, x) = f(t)\,x^r$.

By substituting in (9) we get

(10)  $\Phi(t,x) = e^{\lambda(T-t)}x^r$,  where  $\lambda = r\left[b + \dfrac{(a-b)^2}{2\sigma^2(1-r)}\right]$.

Now from (8) and (10) we obtain

(11)  $u^*(t,x) = \dfrac{a-b}{\sigma^2(1-r)}$.

If $0 \le \dfrac{a-b}{\sigma^2(1-r)} < 1$, then (11) is the solution of the problem ($u^*$ is in fact a constant).
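The substitution step leading to (10) can be made explicit. A sketch, using only the derivatives of the ansatz $\Phi(t,x) = f(t)x^r$:

```latex
% Derivatives of the ansatz \Phi(t,x) = f(t) x^r:
%   \Phi_t = f'(t) x^r, \quad \Phi_x = r f(t) x^{r-1}, \quad
%   \Phi_{xx} = r(r-1) f(t) x^{r-2}.
% Substituting into (9):
f'(t)x^r + bx \cdot r f(t) x^{r-1}
  - \frac{(a-b)^2}{2\sigma^2}\,
    \frac{r^2 f(t)^2 x^{2r-2}}{r(r-1) f(t) x^{r-2}} = 0 .
% Dividing by x^r > 0 and using 1/(r-1) = -1/(1-r):
f'(t) + r\Bigl[b + \frac{(a-b)^2}{2\sigma^2(1-r)}\Bigr] f(t) = 0,
\qquad f(T) = 1,
% whose solution is f(t) = e^{\lambda(T-t)} with
% \lambda = r\bigl[b + (a-b)^2/(2\sigma^2(1-r))\bigr], as in (10).
```

The terminal condition $f(T) = 1$ comes from the boundary condition $\Phi(T,x) = N(x) = x^r$.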

This result means that in practical portfolio management the investor invests his capital at the initial moment in the proportion $\dfrac{a-b}{\sigma^2(1-r)} : \left(1 - \dfrac{a-b}{\sigma^2(1-r)}\right)$ and does not change it up to the horizon $T$.

If $u^*(t, x)$ depends on $t$ and $x$, the investor rebalances the portfolio at each moment $t$ (in practice, at discrete moments): at the moment $t + dt$ he observes at the market the increases $dp_1$ and $dp_2$ of the two asset prices, calculates the increase of his wealth

$dX_t = X_t\left(u(t)\dfrac{dp_1}{p_1(t)} + (1-u(t))\dfrac{dp_2}{p_2(t)}\right)$,

and composes a portfolio of the two assets in proportion $u^*_{t+dt}$, $1 - u^*_{t+dt}$, where

$u^*_{t+dt} = u^*(t+dt, X_t + dX_t) = u^*(t+dt, X_{t+dt})$.
Example 2. The Merton problem ([14], [23]). There are traded at the market $N + 1$ assets $p^0, p^1, \ldots, p^N$ with the following price processes:

(12)  $\dfrac{dp_t^0}{p_t^0} = r\,dt$,

$\dfrac{dp_t^i}{p_t^i} = b_i(p_t, t)\,dt + \sum_{j=1}^N \sigma_{ij}(p_t, t)\,dB_t^j, \quad i = 1, \ldots, N$,

$p_t = (p_t^1, \ldots, p_t^N)$, $r = \mathrm{const}$.

The investor has a horizon $T$, and at an arbitrary moment $t \in [0, T]$ he possesses a portfolio of the assets $p^0, p^1, \ldots, p^N$ in quantities $\theta_t^0, \theta_t^1, \ldots, \theta_t^N$ respectively, consuming with rate $c_t \ge 0$, where $\theta_t^0, \theta_t^1, \ldots, \theta_t^N, c_t$ are stochastic processes. At the initial moment his wealth is $X_0 \ge 0$ and he trades and consumes without exterior incomes. That means that his wealth $X_t$ at the moment $t$ is

$X_t = \sum_{i=0}^N \theta_t^i p_t^i = X_0 + \sum_{i=0}^N \int_0^t \theta_s^i\,dp_s^i - \int_0^t c_s\,ds \ge 0, \quad t \in [0, T]$.

It follows that

(13)  $dX_t = \sum_{i=0}^N \theta_t^i\,dp_t^i - c_t\,dt$.

This is the "budget equation".

Instead of the processes $\theta_t^i$ it is convenient to introduce the processes

$\alpha_t^i = \dfrac{\theta_t^i p_t^i}{X_t} = \dfrac{\theta_t^i p_t^i}{\sum_{j=0}^N \theta_t^j p_t^j}, \quad i = 0, \ldots, N, \qquad \sum_{i=0}^N \alpha_t^i = 1$,

which represent the part of the wealth invested in $p^i$, i.e. the proportion in which the wealth is distributed among the assets.

Substituting (12) in (13), the budget equation takes the form

$dX_t = \theta_t^0 p_t^0 r\,dt + \sum_{i=1}^N \theta_t^i p_t^i\Big(b_i\,dt + \sum_{j=1}^N \sigma_{ij}\,dB_t^j\Big) - c_t\,dt$

$= \alpha_t^0 X_t r\,dt + \sum_{i=1}^N \alpha_t^i X_t b_i\,dt + \sum_{i,j=1}^N \alpha_t^i X_t \sigma_{ij}\,dB_t^j - c_t\,dt$,

i.e.

(14)  $dX_t = \Big[X_t\Big(r + \sum_{i=1}^N \alpha_t^i(b_i - r)\Big) - c_t\Big]dt + X_t\sum_{i,j=1}^N \alpha_t^i \sigma_{ij}\,dB_t^j$.

The investor's "performance" is defined by the consumption done in the period $[0, T]$ and by the wealth possessed at the moment $T$. More exactly, it is given by the formula

$J^{(c,\alpha)}(t,x) = E^{t,x}\left[\int_t^T L(\tau, X_\tau; c_\tau)\,d\tau + K(X_T)\right]$,

where $E^{t,x}$ is the expectation with respect to the probability law of the process $X$, which begins at the moment $t$ with the value $x$. The investor's goal is, by trading and consuming, to maximize $J^{(c,\alpha)}(t, x)$. Let

$\Phi(t,x) \stackrel{\mathrm{def}}{=} \sup_{(c,\alpha)} J^{(c,\alpha)}(t,x) = J^{(c^*,\alpha^*)}(t,x)$,

where $c^*, \alpha^*$ are the optimal consumption and investment strategies. The function $\Phi(t, x)$ satisfies the HJB equation and the boundary condition

$\Phi(T, x) = K(x)$.

Let us compose the HJB equation. The infinitesimal generator of the process, according to (14), is

$(D^{(c,\alpha)}\Phi)(t,x) = \frac{\partial \Phi}{\partial t} + \Big[x\Big(r + \sum_{i=1}^N \alpha_i(b_i - r)\Big) - c\Big]\frac{\partial \Phi}{\partial x} + \frac{1}{2}x^2\sum_{j=1}^N\Big(\sum_{i=1}^N \alpha_i \sigma_{ij}\Big)^2\frac{\partial^2 \Phi}{\partial x^2}$.

The HJB equation is

$\sup_{(c,\alpha)}\left\{L(t,x;c,\alpha) + (D^{(c,\alpha)}\Phi)(t,x)\right\} = 0, \qquad \Phi(T,x) = K(x)$.

To solve the problem we follow this procedure: having fixed arbitrary $t, x$, we calculate the supremum of the function

$\eta(c,\alpha) = \frac{\partial \Phi(t,x)}{\partial t} + \Big[x\Big(r + \sum_{i=1}^N \alpha_i(b_i - r)\Big) - c\Big]\frac{\partial \Phi(t,x)}{\partial x} + \frac{1}{2}x^2\sum_{j=1}^N\Big(\sum_{i=1}^N \alpha_i\sigma_{ij}\Big)^2\frac{\partial^2 \Phi(t,x)}{\partial x^2} + L(t,x;c,\alpha)$.

Setting equal to zero the derivatives of the function $\eta(c,\alpha)$ with respect to $c$ and $\alpha$, we obtain a linear system of equations, from which we obtain $c$ and $\alpha$ as functions of $\frac{\partial \Phi}{\partial x}$ and $\frac{\partial^2 \Phi}{\partial x^2}$, i.e. $c = c(\Phi_x, \Phi_{xx})$, $\alpha = \alpha(\Phi_x, \Phi_{xx})$. Substituting in $\eta(c,\alpha)$ we obtain

(15)  $\eta\big(c(\Phi_x, \Phi_{xx}), \alpha(\Phi_x, \Phi_{xx})\big) = 0$,

which is a nonlinear partial differential equation. In the special case when $L = 0$ and $K(x) = x^s$, $0 < s < 1$, the problem can be solved explicitly. We look for a solution of (15) in the form $\Phi(t, x) = f(t)x^s$ and obtain an ordinary differential equation of first order for $f(t)$.

II. Two-Stage Stochastic Linear Programs with Fixed Recourse (2S-SLPR)

In the common case, stochastic programs are generalizations of deterministic optimization programs in which some uncontrollable data are not known with certainty. Their typical features are: many decision variables with many potential values, discrete time periods for the decisions, the use of expectation functionals for the goals, and known (or partially known) distributions.

II.1. Formulation ([7])


The two-stage stochastic linear program with fixed recourse takes the following form:

(16)  $\min z = c^T x + E_\xi\left[\min q(\omega)^T y(\omega)\right]$

s.t.  $Ax = b$,

$T(\omega)x + Wy(\omega) = h(\omega)$,

$x \ge 0, \quad y(\omega) \ge 0$.


Here $c$ is a known vector in $\mathbf{R}^{n_1}$,

$b$ is a known vector in $\mathbf{R}^{m_1}$,

$A$ and $W$ are known matrices of size $m_1 \times n_1$ and $m_2 \times n_2$ respectively.

$W$ is called the recourse matrix, which we assume here is fixed.

For each $\omega$, $T(\omega)$ is an $m_2 \times n_1$ matrix, $q(\omega) \in \mathbf{R}^{n_2}$, $h(\omega) \in \mathbf{R}^{m_2}$. Piecing together the stochastic components of the problem, we obtain a vector $\xi^T(\omega) = (q(\omega)^T, h(\omega)^T, T_1(\omega), \ldots, T_{m_2}(\omega))$ with $N = n_2 + m_2 + m_2 \times n_1$ components, where $T_i(\omega)$ is the $i$-th row of $T(\omega)$.

$E_\xi$ represents the mathematical expectation with respect to $\xi$.

A distinction is made between the first stage and the second stage. The first-stage decisions are represented by the vector $x$. Corresponding to $x$ are the first-stage vectors and matrices $c$, $b$ and $A$. In the second stage, a random event $\omega \in \Omega$ (the set of all random events) may realize. For a given realization $\omega$, the second-stage problem data $q(\omega)$, $h(\omega)$ and $T(\omega)$ become known. Each component of $q$, $T$ and $h$ is thus a possible random variable.

Let also $\Xi \subset \mathbf{R}^N$ be the support of $\xi$, i.e. the smallest closed subset of $\mathbf{R}^N$ such that $P(\xi \in \Xi) = 1$.

As just said, when the random event $\omega$ is realized, the second-stage problem data $q(\omega)$, $h(\omega)$ and $T(\omega)$ become known. Then the second-stage decisions $y(\omega, x)$ must be taken. The dependence of $y$ on $\omega$ is of a completely different nature from the dependence of $q$ or the other parameters on $\omega$. It is not functional but simply indicates that the decisions $y$ are typically not the same under different realizations of $\omega$. In problems of this type we have two stages:

- in the first stage we make a decision;

- in the second stage we see a realization of the stochastic elements of the problem, but we are allowed to make further decisions to avoid the constraints of the problem becoming infeasible.

In other words, in the second stage we have recourse to a further degree of flexibility to preserve feasibility (but at a cost). Note particularly that in this second stage the decisions we make will depend on the particular realization of the stochastic elements observed.

The objective function of (16) contains a deterministic term $c^T x$ and the expectation of the second-stage objective $q(\omega)^T y(\omega)$ taken over all realizations of the random event $\omega$. This second-stage term is the more difficult one because, for each $\omega$, the value $y(\omega)$ is the solution of a linear program.

Using discrete distributions, the resulting model can be reformulated as a linear programming problem called the deterministic equivalent program (DEP).


Problem (16) is equivalent to the DEP:

(17)  $\min z = c^T x + \hat{Q}(x)$

(18)  s.t.  $Ax = b$, $x \ge 0$,

where

(19)  $\hat{Q}(x) = E_\xi\, Q(x, \xi(\omega))$,

and, for a given realization $\omega$,

$Q(x, \xi(\omega)) = \min_y\{q(\omega)^T y \mid Wy = h(\omega) - T(\omega)x,\ y \ge 0\}$

is the second-stage value function.
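As a concrete illustration, the sketch below evaluates $Q(x, \xi)$ with an off-the-shelf LP solver and $\hat{Q}(x)$ as the probability-weighted sum over a finite support. It is a toy instance with invented data; in particular the recourse structure $W = [1, -1]$ (shortage/surplus variables) is our own choice, not from the text.

```python
import numpy as np
from scipy.optimize import linprog

# Toy recourse problem (all numbers invented): first-stage capacity x,
# random demand h; the second stage buys shortage s at unit cost q_short
# and disposes surplus z for free, so W = [1, -1], T = [1], y = (s, z) >= 0,
# and the constraint T x + W y = h reads  s - z = h - x.
q_short = 3.0
scenarios = [(0.3, 4.0), (0.4, 6.0), (0.3, 10.0)]   # (probability, h)

def Q(x, h):
    """Second-stage value Q(x, xi) = min{ q^T y : W y = h - T x, y >= 0 }."""
    res = linprog(c=[q_short, 0.0],
                  A_eq=[[1.0, -1.0]], b_eq=[h - x],
                  bounds=[(0, None), (0, None)], method="highs")
    return res.fun

def Q_hat(x):
    """Expected recourse cost (19) for the finite distribution above."""
    return sum(p * Q(x, h) for p, h in scenarios)

# With a first-stage cost of c = 1 per unit, minimize c*x + Q_hat(x) on a
# grid; by Theorem 4(b) Q_hat is piecewise linear, with kinks at the h's.
c = 1.0
best = min(np.linspace(0.0, 12.0, 121), key=lambda x: c*x + Q_hat(x))
print(best, c*best + Q_hat(best))
```

For this instance the minimizer is $x = 6$ (covering demand up to the second scenario), with total expected cost $6 + 0.3 \cdot 3 \cdot (10-6) = 9.6$.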

II.2. Feasibility Sets ([7])

The expected value function of the second stage is given in (19). Because of its importance for applications, let us consider the situation when $\xi$ is a discrete random variable, precisely $\xi \in \Xi$ with $\Xi$ a finite or countable set.

The second-stage value function is then the weighted sum of the values $Q(x, \xi)$ over the various possible realizations of $\xi$.

Let $K_1 = \{x \mid Ax = b,\ x \ge 0\}$ be the set determined by the fixed constraints, namely those that do not depend on the particular realization of the random vector, and let $K_2 = \{x \mid \hat{Q}(x) < \infty\}$ be the second-stage feasibility set. We may now redefine the DEP as

$\min\{c^T x + \hat{Q}(x) \mid x \in K_1 \cap K_2\}$.

Let $K_2(\xi) = \{x \mid Q(x, \xi) < +\infty\}$ be the elementary feasibility sets and

$K_2^p = \{x \mid \forall \xi \in \Xi,\ \exists y \ge 0 \text{ s.t. } Wy = h - Tx\} = \bigcap_{\xi \in \Xi} K_2(\xi)$.

The following is valid:

Theorem 2. (a) For each $\xi$, the elementary feasibility set $K_2(\xi)$ is a closed convex polyhedron; hence the set $K_2^p$ is closed and convex.

(b) When $\Xi$ is finite, $K_2^p$ is also polyhedral and coincides with $K_2$.

Theorem 3. When $W$ is fixed and $\xi$ has finite second moments:

(a) $K_2$ is closed and convex.

(b) If $T$ is fixed, $K_2$ is polyhedral.

(c) Let $\Sigma(T)$ be the support of the distribution of $T$. If $h(\omega)$ and $T(\omega)$ are independent and $\Sigma(T)$ is polyhedral, then $K_2$ is polyhedral.

Theorem 4. For a stochastic program with fixed recourse where $\xi$ has finite second moments:

(a) $\hat{Q}(x)$ is a Lipschitzian convex function and is finite on $K_2$.

(b) When $\xi$ is finite, $\hat{Q}(x)$ is piecewise linear.

(c) If $F(\xi)$ is an absolutely continuous distribution, $\hat{Q}(x)$ is differentiable on $K_2$.

For the proofs of these theorems see [7].

These theorems allow us to propose, without great effort, algorithms for solving 2S-SLPR (corresponding to the theorems' conditions).
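For a finite $\Xi$, the simplest such algorithm is to solve the DEP directly in its extensive form: a single LP over $(x, y_1, \ldots, y_K)$ with one copy of the second-stage constraints per scenario. A sketch on a toy instance (all data invented; the recourse $W = [1, -1]$ with shortage/surplus variables is our own choice):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2S-SLPR (invented data): first-stage capacity x at unit cost c1;
# scenario k has demand h_k and recourse variables y_k = (s_k, z_k)
# (shortage bought at q_short, free surplus), with constraint
#   T x + W y_k = h_k,  i.e.  x + s_k - z_k = h_k.
probs  = [0.3, 0.4, 0.3]
demand = [4.0, 6.0, 10.0]
c1, q_short = 1.0, 3.0
K = len(probs)

# Objective: c1*x + sum_k p_k * q_short * s_k  (surplus z_k costs nothing).
cost = [c1] + [v for p in probs for v in (p * q_short, 0.0)]
A_eq = np.zeros((K, 1 + 2*K))
for k in range(K):
    A_eq[k, 0] = 1.0            # T x
    A_eq[k, 1 + 2*k] = 1.0      # shortage s_k
    A_eq[k, 2 + 2*k] = -1.0     # surplus z_k
res = linprog(cost, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (1 + 2*K), method="highs")
print(res.x[0], res.fun)        # first-stage decision and optimal value
```

The extensive form grows linearly with the number of scenarios, which is what motivates decomposition algorithms for large $\Xi$.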

III. Multistage Stochastic Linear Programs with Recourse (MS-SLPR)

III.1. Formulation ([7])

The multistage stochastic linear program with fixed recourse takes the following form:

$\min z = c^1 x^1 + E_{\xi^2}\Big[\min c^2(\omega)x^2(\omega^2) + \cdots + E_{\xi^H}\big[\min c^H(\omega)x^H(\omega^H)\big]\cdots\Big]$

s.t.  $W^1 x^1 = h^1$,

$T^1(\omega)x^1 + W^2 x^2(\omega^2) = h^2(\omega)$,

$\cdots$

$T^{H-1}(\omega)x^{H-1}(\omega^{H-1}) + W^H x^H(\omega^H) = h^H(\omega)$,

$x^1 \ge 0; \quad x^t(\omega^t) \ge 0, \quad t = 2, \ldots, H$,

where $c^1$ is a known vector in $\mathbf{R}^{n_1}$, $h^1$ is a known vector in $\mathbf{R}^{m_1}$, $\xi^t(\omega) = (c^t(\omega), h^t(\omega), T_1^{t-1}(\omega), \ldots, T_{m_t}^{t-1}(\omega))$ is a random $N_t$-vector defined on $(\Omega, \Sigma, P)$ for all $t = 2, \ldots, H$, and each $W^t$ is a known $m_t \times n_t$ matrix. The decisions $x^t$ depend on the history up to time $t$, which we indicate by $\omega^t$. We also suppose that $\Xi^t$ is the support of $\xi^t$.

A multistage stochastic program with recourse is a multi-period mathematical program where parameters are assumed to be uncertain along the time path. The term "recourse" means that the decision variables adapt to the different outcomes of the random parameters at each time period. Different formulations of MS-SLPR are proposed in the literature; see for example [5] and [12].

Using discrete distributions, the resulting model at every stage can be reformulated as a linear programming problem, also called a deterministic equivalent. In this way it is possible to represent the realizations of the random vector $\xi^t$ by a so-called scenario tree. As an illustration, we present in Figure 1 an example of a scenario tree.


     Figure 1. A tree of seven scenarios over four periods.
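A scenario tree like that of Figure 1 is straightforward to represent in code. The sketch below (node data and probabilities are our own illustration, not taken from the figure) builds a tree with seven leaves over four periods and checks that the scenario probabilities sum to one:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a scenario tree: period t, unconditional probability of
    reaching the node, and the realization of the random data at the node."""
    t: int
    prob: float
    data: float
    children: list = field(default_factory=list)

def branch(parent, cond_prob, data):
    """Attach a child realized with conditional probability cond_prob."""
    child = Node(parent.t + 1, parent.prob * cond_prob, data)
    parent.children.append(child)
    return child

# Four periods, seven scenarios (leaves), mirroring the shape of Figure 1.
root = Node(1, 1.0, 100.0)
a = branch(root, 0.5, 105.0)
b = branch(root, 0.5, 96.0)
nodes3 = [branch(a, 0.4, 110.0), branch(a, 0.6, 101.0),
          branch(b, 0.3, 99.0), branch(b, 0.7, 92.0)]
for n, ps in zip(nodes3, ([0.5, 0.5], [0.5, 0.5], [0.5, 0.5], [1.0])):
    for k, p in enumerate(ps):
        branch(n, p, n.data * (1.02 - 0.04 * k))

def leaves(node):
    """Each leaf corresponds to one scenario (a full path through the tree)."""
    if not node.children:
        return [node]
    return [leaf for c in node.children for leaf in leaves(c)]

scen = leaves(root)
print(len(scen), sum(leaf.prob for leaf in scen))   # 7 scenarios, mass 1
```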

We first describe the deterministic equivalent form of this problem in terms of a dynamic program. If the stages are $1$ to $H$, we can define the states as $x^t(\omega^t)$. Noting that the only interaction between periods is through this realization, we can define a dynamic-programming type of recursion. For the terminal conditions we have

(39)  $Q^H(x^{H-1}, \xi^H(\omega)) = \min c^H(\omega)x^H(\omega)$

s.t.  $W^H x^H(\omega) = h^H(\omega) - T^{H-1}(\omega)x^{H-1}, \quad x^H(\omega) \ge 0$.

Letting $\hat{Q}^{t+1}(x^t) = E_{\xi^{t+1}}[Q^{t+1}(x^t, \xi^{t+1}(\omega))]$ for all $t$, we obtain the recursion for $t = 2, \ldots, H-1$:

(40)  $Q^t(x^{t-1}, \xi^t(\omega)) = \min c^t(\omega)x^t(\omega) + \hat{Q}^{t+1}(x^t)$

s.t.  $W^t x^t(\omega) = h^t(\omega) - T^{t-1}(\omega)x^{t-1}, \quad x^t(\omega) \ge 0$,

where we use $x^t$ to indicate the state of the system. Other state information, in terms of the realizations of the random parameters up to time $t$, should be included if the distribution of $\xi^{t+1}$ is dependent on the past outcomes.

The value we seek is:

(41)  $\min z = c^1 x^1 + \hat{Q}^2(x^1)$

s.t.  $W^1 x^1 = h^1, \quad x^1 \ge 0$,

which has the same form as the two-stage deterministic equivalent program.

It is plain that an obvious extension of the simple 2S-SLPR above is to have more stages. In such cases it is common that:

- the stochastic elements have a discrete distribution;

- the realizations of the stochastic elements are represented as a number of scenarios of the future.
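The recursion (39)-(41) can be sketched directly on a scenario tree. Below, instead of a stage LP we use a deliberately simple stage problem (a one-dimensional ordering decision with an invented shortage penalty, minimized over a grid), so that the backward recursion itself stays visible:

```python
from collections import namedtuple

# children: list of (conditional probability, Node); demand is the
# realization of the random parameter at the node (all numbers invented).
Node = namedtuple("Node", "demand children")

GRID = [0.5 * i for i in range(25)]   # candidate decisions x^t in [0, 12]
c, q = 1.0, 3.0                        # order cost and shortage penalty

def Q(node, x_prev):
    """Recursion (39)-(40): cost incurred at `node` given the previous
    decision x_prev, plus the optimal expected cost-to-go."""
    stage = q * max(node.demand - x_prev, 0.0)   # recourse cost at this node
    if not node.children:
        return stage                             # terminal condition (39)
    # (40): choose x^t minimizing immediate cost + expected next-stage value
    return stage + min(c * x + sum(p * Q(child, x)
                                   for p, child in node.children)
                       for x in GRID)

# Two-period instance: root (no demand yet), then demand 2 or 4 w.p. 1/2.
tree = Node(0.0, [(0.5, Node(2.0, [])), (0.5, Node(4.0, []))])
# Program (41): the first-stage decision is taken before any randomness.
z = min(c * x + sum(p * Q(child, x) for p, child in tree.children)
        for x in GRID)
print(z)   # ordering 4 units covers both scenarios: z = 4.0
```

On larger trees each stage problem would be the linear program of (40), with the grid search replaced by an LP solver; nested decomposition methods follow exactly this backward pattern.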

