Fourth LACCEI International Latin American and Caribbean Conference on Engineering and Technology (LACCEI ‘2006)

“Breaking Frontiers and Barriers in Engineering: Education, Research and Practice”, 21-23 June 2006, Mayaguez, Puerto Rico

    Systems of Inequations

    Jeffrey L. Duffany, Ph.D.

    Universidad del Turabo

    Gurabo, PR USA

    jduffany@suagm.edu

    Abstract

A system of inequations is equivalent to the compatibility problem, one of the best-known classic examples of an NP-complete problem. A solution method is described which is analogous to solving systems of equations using substitution and elimination of variables. A decision function is used to determine which variables to combine. The solution method can be implemented using set-theoretical operators such as union and intersection or using a combination of modular and traditional arithmetic operators. The performance of the basic method is shown to be significantly improved when used in a variation called the equivalence class subset algorithm.

    1 Introduction

A system of inequations is represented by a set of n variables xi and a set of incompatibilities involving pairs of variables (xi,xj). For example, suppose n=2 and the system is represented by inequation (1):

    x1 ≠ x2    (1)

For this example x1=1 and x2=2 is a solution, as is any other assignment of two distinct values to x1 and x2, since both satisfy every inequation in the system. In general, the solution to a system of inequations is drawn from an arbitrary set of distinct elements. There are an infinite number of solutions to this system, but a solution s* is called optimal only if the number of elements in the solution set is minimum over all possible solution sets {s}. In this case the cardinality of the optimal solution k* = max(s*) is 2, since s* = {1,2} is a solution and it is not possible to solve this system with a set of lower cardinality. The cardinality of a solution k = max(s) is not the same as the cardinality of the solution |s|, which is always equal to the dimension of the system (i.e., the number of variables “n”). A system of inequations can be represented by a binary symmetric square matrix A, with a zero representing a compatibility and a one an incompatibility, as in equation (2).

    A = [ 0 1 ]    (2)
        [ 1 0 ]

The matrix A in equation (2) is a representation of the system of inequations x1 ≠ x2. There are always two ones in the A matrix for every inequation, representing both xi ≠ xj and xj ≠ xi. A system of inequations can also be represented by inequation (3):

    Ax ≠ x    (3)

if the addition operator “+” in the vector product is replaced by the union (∪) set operator. The variable x1 is represented as the first row of column vector x and x2 is represented by the second row of column vector x. Analogous to a matrix representation of equations, inequation (3) represents a system of inequations of the form xi ≠ xj ∪ xk ∪ .... Note that “≠” in inequation (3) is interpreted differently from standard usage when either side of the expression is a set of variables. In this case the “≠” symbol distributes over each element of the set. In other words, xi ≠ xj ∪ xk is equivalent to the inequations xi ≠ xj and xi ≠ xk. Another way to express this would be to use the set-theoretical ∉ (not a member of) symbol. However, the “≠” symbol will be used to emphasize the similarity between systems of equations and systems of inequations. Systems of inequations are equivalent to the well-known compatibility problem [Clay, 2005], which is categorized as NP-hard [Weisstein, 2005a][Horowitz, 1978].
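As a concrete illustration of this matrix representation (not part of the original paper), the following Python/NumPy sketch builds the A matrix for a small illustrative system and checks a candidate solution; the helper names are illustrative only.

    import numpy as np

    def build_matrix(n, inequations):
        """Build the binary symmetric A matrix: a one marks an incompatibility xi != xj."""
        A = np.zeros((n, n), dtype=int)
        for i, j in inequations:
            A[i, j] = 1   # each inequation contributes two ones,
            A[j, i] = 1   # one for xi != xj and one for xj != xi
        return A

    def is_feasible(A, s):
        """A solution s assigns one value per variable; it is feasible if no
        incompatible pair of variables shares the same value."""
        n = len(s)
        return all(not (A[i, j] == 1 and s[i] == s[j])
                   for i in range(n) for j in range(i + 1, n))

    # A 3-variable example (0-based indices): x1 != x2 and x2 != x3.
    A = build_matrix(3, [(0, 1), (1, 2)])
    s = [1, 2, 1]                  # a feasible solution of cardinality k = 2
    print(is_feasible(A, s))       # True
    print(max(s))                  # k = max(s) = 2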

2 Substitution and Elimination of Variables

A solution technique for systems of inequations can be defined which is analogous to the technique used for systems of equations, also known as the method of substitution and elimination of variables. This method always finds a feasible solution “s”, but the solution that it finds may or may not be optimal. An example of substitution of variables is given by the following pair of inequations (4), where variable x1 is used to substitute for and eliminate variable x4:

    x1 ≠ x2 ∪ x3
    x4 ≠ x3 ∪ x5        →        x1 ≠ x2 ∪ x3 ∪ x5    (4)

Since there is no constraint that x1 ≠ x4 in the pair of inequations (4), it is possible to set x1 = x1 ∪ x4 and eliminate x4 (the symbol “∪” representing the union operator). The union operator can be considered the set-theoretical equivalent of the addition operator. The combined inequations assert that x1 ≠ x2 and that x1 ≠ x3 and that x1 ≠ x5. An optimal solution is given by s*=(1,2,2,1,2). The substitution of x1 for x4 results in x1=x4=1 and x2=x3=x5=2 in the optimal solution. The variables have been partitioned into two sets {x1,x4} and {x2,x3,x5}. Since k*=2, any optimal solution will partition the variables into two sets, each of which is referred to as an equivalence class or independent set. This leads to the following observation.

Observation: For any system of inequations, any two variables that are in the same equivalence class of any optimal solution can be associated to create a new system of inequations which has the same optimal solution cardinality k* as the original system.

Combining variables in this fashion will always lead to a feasible but not necessarily optimal solution. This process can be continued until eventually xi ≠ xj for all pairs of variables (xi,xj). At this point either an optimal solution s* has been found or a feasible but not optimal solution s (k>k*) has been found. Back substitution is used to find the solution for the original system.
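The substitution-and-elimination step itself is simple to state in code. The sketch below (Python/NumPy, not the paper's own implementation) combines two compatible variables by taking the union of their constraint rows and deleting the eliminated row and column, using the system of inequation (4) as input.

    import numpy as np

    def combine(A, i, j):
        """Combine compatible variables xi and xj (requires A[i, j] == 0): xi takes
        the union (logical OR) of both constraint rows, then row j and column j
        are removed from the system."""
        assert A[i, j] == 0, "only compatible variables may be combined"
        A = A.copy()
        A[i, :] = A[i, :] | A[j, :]      # union of the two constraint sets
        A[:, i] = A[i, :]                # keep A symmetric
        A[i, i] = 0                      # no self-constraint
        return np.delete(np.delete(A, j, axis=0), j, axis=1)

    # The system of inequation (4): x1 != x2, x1 != x3, x4 != x3, x4 != x5 (0-based).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]])
    print(combine(A, 0, 3))   # substitute x1 for x4: the combined variable
                              # is now incompatible with x2, x3 and x5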

For every system of inequations in n variables there is at least one optimal solution of minimum cardinality k* and exactly one trivial solution s(n)={1,2,3,...,n}. The trivial solution s(n) represents the association of each variable xi with a different integer i ∈ {1,2,3,...,n}. In the system x1 ≠ x2 the unique optimal solution s* and the trivial solution s(n) are the same, s*=s(n)=(1,2). It should also be noted that in this case the number of solutions of cardinality less than k*=2 or greater than n=2 is zero. In general, any solution that is not optimal has cardinality k>k* and can be referred to as a suboptimal solution. For any given system there can be a large number of suboptimal solutions with k* < k ≤ n.

The number of pairs of variables that can be substituted for one another in a given system is an integer between 0 and n(n-1)/2 inclusive. Finding an optimal solution for a system of inequations is equivalent to having a decision function that can, for any A matrix, select two variables that are in the same equivalence class of some optimal solution s* in {s*}. For any system of inequations the set X of off-diagonal zeros of the matrix A represents the set of all pairs of variables that can be combined into equivalence classes, X = {xi,xj | aij=0, i≠j}. If |X|=0 (X={}) then the system is called a complete system. For any system that is not complete there is always at least one pair of variables that when combined leads to an optimal solution. If |X|=n(n-1)/2 then all variables are in the same equivalence class and X=Xec, where Xec is the subset of X corresponding to an optimal or suboptimal solution “s”, Xec = {xi,xj | xi and xj are in the same equivalence class of s}.

Finding an optimal solution for a system of inequations can be accomplished by finding a mapping function f(X) which partitions the set X = {xi,xj | aij=0} into two mutually exclusive subsets Xa ∪ Xc = X, Xa ∩ Xc = {}. The set Xc represents the subset of X for which the variables xi and xj can be found in the same equivalence class of at least one optimal solution. The set Xa represents the subset of X for which the variables (xi,xj) are never found in the same equivalence class of any optimal solution. The subscripts c and a of Xc and Xa are abbreviations of the words chromatic and achromatic. If two variables are combined and the cardinality of an optimal solution of the new A' matrix is the same as that of the original A matrix, then the operation can be said to represent a chromatic transformation. If not, then it can be said to be an achromatic transformation. Without loss of generality the mapping function f(X) can be referred to as the decision function f(A). For example, the decision function f(A) could take every xij ∈ X and map each xij into a 1 or a 0, depending on whether association will lead to an optimal solution or not.

    3 Canonical Form

Consider a system of inequations in 5 variables as shown in equation 5(a). In a manner similar to what was done for equation (4), an optimal solution is easily found as s*=(1,2,1,2,1). There is only one unique optimal solution s*, which partitions the variables into two equivalence classes {x1,x3,x5} and {x2,x4}.

        [ 0 1 0 0 0 ]               [ 0 0 0 1 0 ]
        [ 1 0 1 0 0 ]               [ 0 0 0 1 1 ]
    A = [ 0 1 0 1 0 ]  (a)     A* = [ 0 0 0 0 1 ]  (b)    (5)
        [ 0 0 1 0 1 ]               [ 1 1 0 0 0 ]
        [ 0 0 0 1 0 ]               [ 0 1 1 0 0 ]

The matrix A* given in equation 5(b) is a permutation of the variables of A that corresponds to the partitioning into the equivalence classes of s*. Note that A* is in block diagonal form with square submatrices of zeros of dimension 3x3 and 2x2 along the main diagonal. The matrix A* resembles the echelon form used in solving systems of equations. It can be regarded as a canonical form in that any system of inequations can always be represented in this form. The canonical form can also be regarded as a spectral decomposition of the matrix A by a permutation matrix P as in equation (6):

    A* = P A P^-1    (6)

If the variables are ordered such that the last one of each equivalence class cannot combine with the first variable of the next, the canonical form A* has the property that an optimal solution s* can be determined by repeatedly choosing and associating (i.e., combining) variables whose subscripts differ by one, (xi, xi+1) for i ∈ {1,2,...,n-1}. In example 5(b) this would require switching the last two rows. Using this technique the canonical form A* can be used to uniquely represent an optimal solution vector s*.
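The canonical form is straightforward to compute once a solution vector is known. The following sketch (Python/NumPy, an illustration rather than the paper's code) groups the variables of equation 5(a) by their class in s* and forms A* = P A P^-1, using the fact that P^-1 = P^T for a permutation matrix.

    import numpy as np

    def canonical_form(A, s):
        """Permute A so that variables in the same equivalence class of the
        solution s become adjacent, giving the block-diagonal canonical form."""
        order = np.argsort(s, kind="stable")   # group variables by class label
        n = len(s)
        P = np.zeros((n, n), dtype=int)
        P[np.arange(n), order] = 1             # row k of P selects variable order[k]
        return P @ A @ P.T                     # P @ A @ P^-1, since P^-1 = P^T

    # The 5-variable system of equation 5(a) with s* = (1,2,1,2,1).
    A = np.array([[0, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]])
    print(canonical_form(A, [1, 2, 1, 2, 1]))  # zero blocks of size 3x3 and 2x2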

From equation 5(a) it is seen that the off-diagonal zeros represent the set X and that there are six ways of choosing pairs xij of the 5 variables, i,j ∈ {1,2,...,5}, i≠j, for the purpose of substitution and elimination. Of the six, exactly four belong to Xc={x13,x15,x35,x24} and exactly two belong to Xa={x14,x25}. In equation 5(b) the set Xc corresponds to zeros inside the diagonal blocks and the set Xa corresponds to zeros outside the diagonal blocks. In general, systems that have more than one solution will have members of Xc outside of the diagonal blocks.

    4 K-partite Systems

The matrix A* in equation 5(b) is an incomplete bipartite system due to the zeros outside of the diagonal blocks. A complete bipartite system has all ones outside of the diagonal blocks. Similarly, a complete k-partite system has a unique optimal solution s* which partitions the variables into k* equivalence classes and, when permuted into canonical form, has all ones outside the diagonal blocks. A complete k-partite system is represented by the symbol Kp1p2...pk where the values pi represent the number of variables in each equivalence class. This is illustrated in equation (7) for the complete k-partite system K322.

           [ 0 0 0 1 1 1 1 ]
           [ 0 0 0 1 1 1 1 ]
           [ 0 0 0 1 1 1 1 ]
    K322 = [ 1 1 1 0 0 1 1 ]    (7)
           [ 1 1 1 0 0 1 1 ]
           [ 1 1 1 1 1 0 0 ]
           [ 1 1 1 1 1 0 0 ]

The system K322 is an overconstrained system. Removing any single constraint will create a different system K'322 which has the same unique optimal solution as the original system. In general, more than one constraint can be removed from a complete k-partite system without changing the optimal solution s*.

Let A be any system of inequations with an optimal solution vector s*. Let the cardinalities of the equivalence classes of s* be given by (p1,p2,...,pk). Then there is a complete k-partite system Kp1p2...pk that has the same optimal solution vector s* as A, which can be derived from A by changing all zeros to ones everywhere outside the block diagonals of the canonical form of the A matrix. For example, refer to equation 5(b), which has two zeros outside the block diagonal. Either or both of these zeros can be changed to ones, creating a new and more overconstrained system of inequations with the same unique optimal solution vector s*. If both zeros are changed to one, the result is the complete bipartite system K32 having the same optimal solution vector s*=(1,1,1,2,2).
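This completion of a system to its complete k-partite counterpart can be written in a few lines. The sketch below (Python/NumPy, illustrative only) sets every entry between variables in different equivalence classes of a given solution to one; applied to equation 5(a) with s*=(1,2,1,2,1) it yields the complete bipartite system K32 (up to permutation into canonical form).

    import numpy as np

    def kpartite_completion(A, s):
        """Set every entry between variables in different equivalence classes
        of s to 1, producing a complete k-partite system with the same s."""
        s = np.asarray(s)
        return np.where(s[:, None] != s[None, :], 1, A)

    # Equation 5(a) with s* = (1,2,1,2,1).
    A = np.array([[0, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]])
    print(kpartite_completion(A, [1, 2, 1, 2, 1]))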

    5 Decision Functions

    A decision function f(A) is a function which imposes an ordering on the set X of pairs of variables that reflects the likelihood they belong to the same equivalence class of an optimal solution. For example the decision function f(A) = A for a system of inequations can be considered an ordering of the set X where each pair of variables is given the same likelihood or probability of being in the same equivalence class since for f(A) = A, aij=0 for all xij in X. Since this decision function provides no information regarding the ordering of the pairs it implies that to find an optimal solution all of the possibilities must be searched. The decision function f(A) = A can be considered an ambivalent or neutral decision function.

On the opposite extreme is the perfect decision function which will always choose two variables (xi,xj) of a system of inequations that are in the same equivalence class of an optimal solution s*. However, even if a perfect decision function does not exist it may be possible to find an imperfect decision function such that a particular xij ∈ X is more likely than another to be in the same equivalence class of some s* in {s*}.

    The existence of even an imperfect decision function could imply the existence of an algorithm for finding an optimal solution without having to search all of the possibilities.

The canonical forms shown in equations 5(b) and (7) represent the variables of a system partitioned into equivalence classes. In particular, it is seen that in any complete k-partite system such as K322 the rows of the matrix are identical for each variable in any equivalence class. If each row is considered as a vector, each pair of variables in the same equivalence class shares the same set of constraints. In other words, the intersection (logical AND) of the two sets of constraints is the same for all pairs of variables in the same equivalence class. This coincidentally is equal to the inner product of the two vectors representing the rows of the A matrix, as indicated in equation (8).

    xi · xj = | xi ∩ xj |    (8)

To calculate the intersection of the set of constraints of every variable with every other variable in a system of inequations, matrix multiplication can be used. In other words, equation (8) can be generalized to every pair of variables in the system by using the square of the A matrix (A²). For example, equation 9(b) shows the result of applying equation (8) to all pairs of variables (xi,xj) in the complete k-partite system K322 of equation (7) by squaring the matrix.

           [ 0 0 0 1 1 1 1 ]                 [ 4 4 4 2 2 2 2 ]
           [ 0 0 0 1 1 1 1 ]                 [ 4 4 4 2 2 2 2 ]
           [ 0 0 0 1 1 1 1 ]                 [ 4 4 4 2 2 2 2 ]
    K322 = [ 1 1 1 0 0 1 1 ]  (a)    K322² = [ 2 2 2 5 5 3 3 ]  (b)    (9)
           [ 1 1 1 0 0 1 1 ]                 [ 2 2 2 5 5 3 3 ]
           [ 1 1 1 1 1 0 0 ]                 [ 2 2 2 3 3 5 5 ]
           [ 1 1 1 1 1 0 0 ]                 [ 2 2 2 3 3 5 5 ]

The value of K322² for all pairs within an equivalence class is either 4 or 5, and this value is greater than that of any pair chosen from two different equivalence classes. The decision function f(A) = max(A²) is a perfect decision function for all complete k-partite systems regardless of optimal solution cardinality k* or system dimension n {n | n > 0, n → ∞}. For systems which are not complete k-partite, the pair of variables (xi,xj) corresponding to the maximum value f(A) = max(A²) is not guaranteed to be in the same equivalence class of an optimal solution {s*}. Note that the decision function f(A)=max(A²) is global in the sense that it makes use of information about every possible choice of variables {(xi,xj) | (i,j) ∈ {1,2,3,...,n}} when making a decision. A geometrical interpretation of the decision function f(A)=max(A²) is that it chooses two points that are as close together as possible without touching.
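In code, the f(A) = max(A²) decision function amounts to squaring A and scanning the compatible pairs for the largest entry. The sketch below (Python/NumPy, not the paper's implementation) applies it to K322 from equation (7) and returns a within-class pair.

    import numpy as np

    def decide_max_A2(A):
        """Among compatible pairs (A[i, j] == 0, i < j), return the pair whose
        rows share the most constraints, i.e. the largest value of (A @ A)[i, j]."""
        A2 = A @ A
        n = A.shape[0]
        best, best_val = None, -1
        for i in range(n):
            for j in range(i + 1, n):
                if A[i, j] == 0 and A2[i, j] > best_val:
                    best, best_val = (i, j), A2[i, j]
        return best, best_val

    K322 = np.array([[0, 0, 0, 1, 1, 1, 1],
                     [0, 0, 0, 1, 1, 1, 1],
                     [0, 0, 0, 1, 1, 1, 1],
                     [1, 1, 1, 0, 0, 1, 1],
                     [1, 1, 1, 0, 0, 1, 1],
                     [1, 1, 1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1, 0, 0]])
    print(decide_max_A2(K322))   # a within-class pair, (3, 4), with intersection 5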

For another example of the decision function f(A) = max(A²), consider a system of n=100 variables generated at random with solution cardinality k* = max(s*) = 3. The matrix A for this system was squared and the intersection between every (xi,xj) pair in X is shown in Figure 1. The solid line represents the distribution of the intersection between pairs of variables in the same equivalence class Xec as determined from the canonical form of the A matrix corresponding to s*. The dashed line in Figure 1 represents the distribution of the intersection between pairs of variables in different equivalence classes X̄ec.

[Figure 1: overlaid histograms of the intersection of xi and xj (horizontal axis) versus the number of xi xj pairs (vertical axis); the solid curve is for pairs in the same equivalence class, the dashed curve for pairs in different equivalence classes.]

Figure 1. Result of applying f(A) = A² to a random system of 100 variables

Figure 1 corresponds to a matrix of dimension n=100 that has ten thousand aij values (3740 zeros and 7260 ones). Of the 3740 zeros, 100 are on the main diagonal, leaving a total of |X| = 3640 off of the main diagonal. Of the 3640, a total of |Xec| = 3302 were inside a diagonal block and |X̄ec| = 338 were outside any diagonal block of the canonical form of A corresponding to an optimal solution vector s*. Even though the A matrix was not complete k-partite, it was close enough to be in the region of convergence of the decision function f(A) = max(A²), since it separated X into two sets Xec and X̄ec whose ranges did not overlap. In other words, the intersection of any pair of variables in the same equivalence class of s* was greater than that of any pair of variables in different equivalence classes of s*. Note that the sets Xec and X̄ec are not equivalent to Xc and Xa. All xij in Xec of s* are in Xc, while xij in X̄ec may be in either Xc or Xa.

In general the perfect separation of distributions in Figure 1 will not be the case for all systems of inequations. In some cases there will be a perfect separation or partial overlap, but in other cases there will be a complete overlap of the distributions. To make a correct decision it is not necessary for all the values of A² within an equivalence class to be greater than every A² value for variables in different equivalence classes. To make a correct decision it is necessary that one pair of variables in an equivalence class of an optimal solution s* have an intersection greater than the largest intersection of any two variables that are not in the same equivalence class of any optimal solution {s*}.
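The experiment behind Figure 1 can be imitated with a short simulation. The sketch below (Python/NumPy) plants a random solution with k* = 3 classes, adds constraints only between classes, and compares the intersections of same-class and different-class pairs in X; the generation procedure and parameters are assumptions of this sketch, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, density = 100, 3, 0.5
    classes = rng.integers(0, k, size=n)          # planted equivalence classes
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if classes[i] != classes[j] and rng.random() < density:
                A[i, j] = A[j, i] = 1             # constraints only between classes

    A2 = A @ A
    same, diff = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0:                      # only pairs in X
                (same if classes[i] == classes[j] else diff).append(A2[i, j])

    # The distributions are separated when min(same) > max(diff),
    # as in the example of Figure 1.
    print(min(same), max(diff))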

6 Algorithm

    Using a decision function f(A) an algorithm can be derived for solving systems of inequations (Figure 2). The algorithm is recursive. It takes as input an nxn matrix A and reduces it to an (n-1)x(n-1) system, calling itself to solve the reduced system. The solution cardinality k is the dimension of the reduced A matrix when no more variables can be combined.

    ineq(A)
        ij ← f(A)
        if {ij} = {} return A
        xi = xi ∪ xj
        A = A - xj
        ineq(A)

Figure 2. Algorithm for solving systems of inequations

The decision function f(A) identifies a pair of variables xi and xj (also represented as ij or xij) that are likely to be in the same equivalence class. If using f(A) = max(A²), then a given pair would be chosen because the intersection of their constraints was greater than or equal to that of any other pair of variables. The two variables xi and xj are combined using the set union operation (logical OR), and then row i and column i of the A matrix are updated to the “sum” or union of xi and xj. The binary subtraction operator (-) is used to represent the elimination of the variable xj from the system (i.e., the removal of row j and column j from A). The algorithm terminates when the set {ij} is empty, returning the same A matrix that it received as input with the solution indicated by rows and columns permuted into canonical form. The algorithm could also return the solution vector s as shown in Figure 3.

    ineq(A)
        s = {1,2,3,...n}
        ij ← max(A²)
        if {ij} = {} return s
        xi = xi ∪ xj
        A = A - xj
        s[s=j] = i
        s[s>j] = s[s>j] - 1
        ineq(A)

Figure 3. Algorithm with f(A) = max(A²) and solution vector s

The first step in the algorithm of Figure 3 is to initialize the solution vector to the trivial solution s(n)={1,2,3,...,n}. The algorithm then steps through a series of feasible solutions. Each time the dimension of the matrix is reduced by one, the number of equivalence classes in the solution vector s is also reduced by one. The solution vector is updated by taking all variables that currently have solution value j and assigning a new solution value i. All variables that currently have a solution value greater than j have their current value reduced by one. For this to work properly the subscripts {ij} must be chosen such that i < j. There must also be a deterministic stopping criterion and a deterministic method of choosing a unique xij when more than one pair of variables has the same maximal intersection. The convention that has been adopted uses the upper triangular part of the A² matrix, since A² is symmetric. If there is more than one pair of variables in {ij}, choose the one with the lowest i and then the lowest j value. This uniquely determines i and j in any step of the algorithm and assures that any program implementation will go through the exact same deterministic steps. This also ensures that i < j so that the solution vector is calculated properly. The stopping criterion is that the set {ij} is empty ({ij}={}).
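The pseudocode of Figure 3, with the tie-breaking convention just described, can be rendered directly in Python/NumPy. The sketch below is an illustration of the procedure rather than the author's reference implementation; it uses an explicit loop instead of recursion and 1-based class labels in s.

    import numpy as np

    def ineq(A):
        """Figure 3 algorithm: repeatedly combine the compatible pair with the
        largest intersection, breaking ties by lowest i and then lowest j,
        until no compatible pair remains.  Returns the solution vector s."""
        A = np.array(A, dtype=int)
        s = np.arange(1, A.shape[0] + 1)       # trivial solution s(n) = {1,2,...,n}
        while True:
            A2 = A @ A
            m = A.shape[0]
            best = None
            for i in range(m):                 # scanning order gives lowest i, then j
                for j in range(i + 1, m):
                    if A[i, j] == 0 and (best is None or A2[i, j] > A2[best]):
                        best = (i, j)
            if best is None:                   # {ij} = {}: stop
                return s
            i, j = best
            A[i, :] |= A[j, :]                 # xi = xi U xj (union of constraints)
            A[:, i] = A[i, :]
            A[i, i] = 0
            A = np.delete(np.delete(A, j, axis=0), j, axis=1)
            s[s == j + 1] = i + 1              # 1-based bookkeeping as in Figure 3
            s[s > j + 1] -= 1

    # The 5-variable system of equation 5(a):
    A5 = [[0, 1, 0, 0, 0], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
          [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]]
    print(ineq(A5))    # [1 2 1 2 1], i.e. s* with cardinality k = 2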

7 Solution Space

The system space {A} is the set of all binary symmetric zero-diagonal matrices that represent any system of inequations. The solution space S of a decision function can be defined as the subset of all systems {A} for which the method provides an optimal solution, S = {A | f(A) → s*}, where s* is an optimal solution of cardinality k*. The solution space of a decision function can be estimated using simulation, as illustrated in Figure 4 for f(A) = max(A²).

[Figure 4: scatter plot over 5000 random systems, with equivalence class (k) on one axis and constraint density (%) on the other; points mark systems where an optimal solution was found and asterisks mark suboptimal solutions.]

Figure 4. Solution Space S for f(A) = max(A²)

Figure 4 was generated using the algorithm of Figure 2 for 5000 systems of dimension n=100 variables, using uniform distributions of optimal solution cardinality k* and constraint density (Σ aij / (n(n-1))). The points in Figure 4 represent optimal solutions and the asterisks represent suboptimal solutions. Of the 5000 systems of inequations represented in Figure 4 there are approximately 4500 points representing where an optimal solution was found (see Table 1), all falling into a semi-contiguous region. Systems for which an optimal solution was not found can be visualized as teeter-tottering on the edge of indecision, located somewhere in between the regions of absolute convergence of a number of complete k-partite "attractors". Finding the optimal solution s* for the general case where 1<=k*<=n is classified as NP-hard [Weisstein, 2005a][Horowitz, 1978]. Finding an optimal solution s* for the case of three equivalence classes (k*=3) is a special case which is classified as both NP-hard and NP-complete [Weisstein, 2005b]. A solution for either the general case 1<=k*<=n or the NP-complete special case of k*=3 would provide a solution for the entire class NP-complete [Cook, 2005], [Karp, 1972].

[Figure 5: Venn diagram of the constraint sets of two variables xi and xj, showing the elements they have in common and those unique to each.]

Figure 5. Venn diagram illustrating decision functions A², Ā², and ĀA

The decision function f(A) = max(A²) is only one of many that have been identified based on the powers of the matrix A and its complement Ā. For example, f(A) = max(Ā²) is the decision function based on the number of zeros in common (i.e., the intersection of non-constraints). This decision function can be thought of as maximizing the number of variables in each equivalence class or independent set. Another decision function, f(A) = min(ĀA), minimizes the number of constraints added to a variable when associating it with another variable. Figure 5 illustrates these decision functions for two variables xi=(1,1,1,0,1,0,1) and xj=(1,1,1,0,0,1,0) whose first 4 elements agree and last 3 elements disagree. The intersection of constraints, or number of ones in common for xi and xj, corresponding to f(A) = max(A²), is |xi ∩ xj| = 3. Similarly, for f(A) = max(Ā²) the number of zeros in common is |x̄i ∩ x̄j| = 1. If the constraints of xi are added to xj, the number of constraints of xj is increased by 2, whereas if the constraints of xj are added to xi, the number of constraints of xi is increased by 1, so min(|x̄i ∩ xj|, |xi ∩ x̄j|) = 1. A generating function for decision functions is given by K^m = (A + Ā)^m, where K is the complete system (k* = n). For m=2 these decision functions are related by equation (10).

    K² = (A + Ā)² = A² + AĀ + ĀA + Ā²    (10)

    It is also possible to use a power series expansion to generate decision functions as in equation (11).

    f(A) = Σi≥0 ci A^i    (11)

Each set of ci values generates a unique decision function. Note that both (10) and (11) include as a subset the set of decision functions f(A) = A^i.
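Each of these decision functions is easy to compute from A and its complement. The sketch below (Python/NumPy, illustrative only) builds the matrices behind the decision functions discussed in this section; the form of the weighted variant in the last entry is one reading of the garbled original and should be treated as an assumption.

    import numpy as np

    def decision_matrices(A):
        """Return the matrices behind the decision functions of Table 1.
        Abar is the complement of A with a zero diagonal."""
        n = A.shape[0]
        Abar = (1 - A) - np.eye(n, dtype=int)
        return {
            "A2":      A @ A,                        # ones in common (shared constraints)
            "Abar2":   Abar @ Abar,                  # zeros in common (shared non-constraints)
            "AbarA":   Abar @ A,                     # constraints gained when combining
            "AbarA_w": (Abar @ A) * (Abar @ Abar),   # weighted variant (assumed form)
        }

    # Example: the complete k-partite system K322 of equation (7).
    K322 = np.array([[0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1],
                     [1, 1, 1, 0, 0, 1, 1], [1, 1, 1, 0, 0, 1, 1],
                     [1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 0, 0]])
    mats = decision_matrices(K322)
    print(mats["AbarA"][0, 1])   # 0: combining x1 and x2 adds no new constraints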

Table 1. Estimated percent solution space S for various decision functions

    decision function        n=10    20      30      50      100     200     300     500     1000
    1. max(A²)               99.5%   98.6%   96.8%   93.2%   90.0%   87.6%   86.1%   87.4%   86.8%
    2. max(Ā²)               91.3%   80.9%   73.7%   66.1%   63.4%   60.2%   58.5%   62.0%   56.7%
    3. min(ĀA)               99.9%   98.4%   96.5%   91.3%   89.2%   88.8%   88.3%   86.6%   87.2%
    4. min(ĀA (*) Ā²)        99.9%   99.4%   97.2%   94.6%   92.6%   90.3%   90.0%   88.9%   88.1%
    5. combined (1-4)        100%    100%    98.8%   96.4%   94.6%   92.5%   90.8%   89.8%   89.3%
    6. f(A)=A                72.4%   61.2%   57.3%   56.8%   55.7%   57.9%   56.9%   56.5%   55.2%

Table 1 shows the estimated solution space of various decision functions for systems of dimension between 10 and 1000. In general the success rate drops off with increasing n but stabilizes to a nearly constant value as n increases. The success rate of f(A)=max(Ā²) is lower than that of f(A)=max(A²), while that of f(A)=min(ĀA) is about the same. The decision function with the best success rate, f(A)=min(ĀA (*) Ā²), is a variation on f(A)=min(ĀA) which weights the ĀA matrix using scalar (element-wise) multiplication (*) with Ā². The effect of this weighting is that it discourages combining two variables if doing so would prevent one of them from combining with another variable that it is highly correlated with. The solution space for each decision function overlaps with the others but not completely, so that using more than one decision function increases the overall success rate. The row of Table 1 labeled "combined" shows that calculating the first four decision functions and using the best result increases the solution space S to almost 90% for systems as large as n=1000. The last row shows the result of the ambivalent or neutral decision function f(A)=A. The resolving ability of any decision function can be measured by calculating the ratio of its success rate to that of the ambivalent decision function.
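The "combined" row can be read as running the reduction once per decision function and keeping the solution of lowest cardinality. The following self-contained sketch (Python/NumPy, illustrative only) parameterizes the Figure 3 procedure by a scoring matrix and applies three of the decision functions; for simplicity it scores ĀA by its upper-triangle entry only, even though ĀA is not symmetric.

    import numpy as np

    def solve(A, score):
        """Figure 3-style reduction driven by a scoring rule score(A): among
        compatible pairs (A[i, j] == 0) the highest-scoring pair is combined."""
        A = np.array(A, dtype=int)
        s = np.arange(1, A.shape[0] + 1)
        while True:
            S, best = score(A), None
            for i in range(A.shape[0]):
                for j in range(i + 1, A.shape[0]):
                    if A[i, j] == 0 and (best is None or S[i, j] > S[best]):
                        best = (i, j)
            if best is None:
                return s
            i, j = best
            A[i, :] |= A[j, :]
            A[:, i] = A[i, :]
            A[i, i] = 0
            A = np.delete(np.delete(A, j, axis=0), j, axis=1)
            s[s == j + 1] = i + 1
            s[s > j + 1] -= 1

    def complement(M):
        return (1 - M) - np.eye(M.shape[0], dtype=int)

    def combined(A):
        """Run several decision functions and keep the solution with smallest k."""
        rules = [lambda M: M @ M,                          # max(A²)
                 lambda M: complement(M) @ complement(M),  # max(Ā²)
                 lambda M: -(complement(M) @ M)]           # min(ĀA), as a negated max
        return min((solve(A, r) for r in rules), key=max)

    # Example: the 5-variable system of equation 5(a).
    A5 = [[0, 1, 0, 0, 0], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
          [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]]
    s = combined(A5)
    print(s, max(s))    # solution vector and its cardinality k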

Consider any bipartite system in canonical form such as B in equation (12). Any bipartite system will have two blocks of zeros on the main diagonal, so the inner product of any two variables from different equivalence classes will always be zero. If the variables are from the same equivalence class, the intersection will in general not be equal to zero. Therefore f(A) = max(A²) solves all bipartite and acyclic systems.

        [ 0 0 0 0 1 1 1 ]
        [ 0 0 0 0 0 1 1 ]
        [ 0 0 0 0 1 1 0 ]
    B = [ 0 0 0 0 1 1 1 ]    (12)
        [ 1 0 1 1 0 0 0 ]
        [ 1 1 1 1 0 0 0 ]
        [ 1 1 0 1 0 0 0 ]

Observation: the decision function f(A) = max(A²) solves all systems of inequations with k*=2. This includes all incomplete and complete bipartite systems and all acyclic systems. In addition, it solves all complete k-partite systems and all systems close enough to complete k-partite to be within its region of convergence, as in Figure 1. The solution space of f(A) = max(A²) includes systems of arbitrarily large dimension n.

Observation: whenever the set of constraints of a variable xi is a subset of the constraints of another variable xj, the variable xi is redundant and can be combined with xj without changing the cardinality of the optimal solution k*. This is the basis of the decision function f(A)=min(ĀA), which minimizes the number of new constraints added to a variable when it is combined with another variable.
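This subset condition is easy to check directly. The sketch below (Python/NumPy, illustrative only) lists the redundant pairs of the bipartite system B of equation (12); whenever the constraint row of xi is dominated by that of xj, combining the two adds no new constraints to xj.

    import numpy as np

    def redundant_pairs(A):
        """Return pairs (i, j) where the constraint set of xi is a subset of the
        constraint set of xj, so xi can be combined with xj without changing k*."""
        n = A.shape[0]
        pairs = []
        for i in range(n):
            for j in range(n):
                if i != j and A[i, j] == 0 and np.all(A[i, :] <= A[j, :]):
                    pairs.append((i, j))
        return pairs

    # The bipartite system B of equation (12): for example, x2's constraints
    # {x6, x7} are a subset of x1's constraints {x5, x6, x7}, so (x2, x1) appears.
    B = np.array([[0, 0, 0, 0, 1, 1, 1],
                  [0, 0, 0, 0, 0, 1, 1],
                  [0, 0, 0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 1, 1, 1],
                  [1, 0, 1, 1, 0, 0, 0],
                  [1, 1, 1, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0, 0]])
    print(redundant_pairs(B))   # 0-based indices; includes (1, 0)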
