A New Grouping Genetic Algorithm for the Quadratic

    Multiple Knapsack Problem

Alok Singh1 and Anurag Singh Baghel2

    1 J. K. Institute of Applied Physics and Technology, Faculty of Science, University of Allahabad, Allahabad 211002, UP, India

alok@jkinstitute.org

    2 Department of Electronics and Communication, Banasthali Vidyapith Jaipur Campus, Sarojini Marg, Jaipur 302001, Rajasthan, India

anuragsbaghel@yahoo.com

    Abstract. The quadratic multiple knapsack problem is an extension of the well known 0/1 multiple knapsack problem. In the quadratic multiple knapsack problem, profit values are associated not only with individual objects but also with pairs of objects. The profit value associated with a pair of objects is added to the overall profit if both objects of the pair belong to the same knapsack. Being an extension of the 0/1 multiple knapsack problem, this problem is also NP-Hard. In this paper, we propose a new steady-state grouping genetic algorithm for the quadratic multiple knapsack problem and compare our results with two recently proposed methods: a genetic algorithm and a stochastic hill climber. The results show the effectiveness of our approach.

    Keywords: Combinatorial optimization, grouping genetic algorithm, knapsack problem, quadratic multiple knapsack problem.

1 Introduction

    The quadratic multiple knapsack problem (QMKP) is an extension of the well known 0/1 multiple knapsack problem (MKP). In MKP we are given a set of n objects and K knapsacks. Each object i, i ∈ {1, 2, ..., n}, has profit pi and weight wi. Each knapsack j, j ∈ {1, 2, ..., K}, has a capacity Cj. The multiple knapsack problem consists of selecting K disjoint subsets of objects to be put into the K knapsacks such that the total weight of the objects in subset j does not exceed Cj and the overall profit of all the selected objects is as large as possible. By introducing binary variables xij to indicate whether object i is included in knapsack j (xij = 1) or not (xij = 0), the MKP can be formulated as:

    Maximize  P = Σ_{i=1}^{n} Σ_{j=1}^{K} x_ij p_i

    Subject to

    Σ_{i=1}^{n} w_i x_ij ≤ C_j,    j = 1, 2, ..., K

    C. Cotta and J. van Hemert (Eds.): EvoCOP 2007, LNCS 4446, pp. 210-218, 2007. © Springer-Verlag Berlin Heidelberg 2007


    Σ_{j=1}^{K} x_ij ≤ 1,    i = 1, 2, ..., n

    x_ij ∈ {0, 1},    i = 1, 2, ..., n,  j = 1, 2, ..., K

    MKP is NP-Hard as for K = 1 it reduces to the 0/1 knapsack problem [1]. QMKP is an extension of MKP. The only difference between QMKP and MKP is that in QMKP profit values are associated not only with individual objects but also with pairs of objects. The profit value pij associated with a pair of objects i and j is added to the overall profit if both objects i and j belong to the same knapsack. Therefore QMKP seeks to maximize

    P = Σ_{i=1}^{n} Σ_{j=1}^{K} x_ij p_i + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} Σ_{k=1}^{K} x_ik x_jk p_ij
QMKP is NP-Hard as it reduces to MKP when all pij are zero.
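For concreteness, the objective above can be evaluated directly from an assignment of objects to knapsacks. The following Python sketch is purely illustrative (the paper's implementation is in C, and all names here are ours):

```python
def qmkp_profit(p, pair, assign, K):
    """Evaluate the QMKP objective.

    p[i]       : profit of object i
    pair[i][j] : pairwise profit p_ij (symmetric, zero diagonal)
    assign[i]  : knapsack index of object i in 0..K-1, or None if unassigned
    """
    n = len(p)
    # individual profits of all assigned objects
    total = sum(p[i] for i in range(n) if assign[i] is not None)
    # a pairwise profit counts only when both objects share a knapsack
    for i in range(n - 1):
        for j in range(i + 1, n):
            if assign[i] is not None and assign[i] == assign[j]:
                total += pair[i][j]
    return total
```

When all pair[i][j] are zero, this reduces to the MKP objective, mirroring the reduction noted above.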

    Though MKP is widely studied and a number of evolutionary algorithms have been proposed for it [2, 3, 4], QMKP was only recently defined and studied by Hiley and Julstrom [5]. They considered a restricted version of QMKP in which all capacities Cj are the same. They presented three methods for their version of QMKP: a greedy heuristic, a generational genetic algorithm and a stochastic hill climber. On the test instances considered, the stochastic hill climber obtained better solutions on average, followed by the genetic algorithm and the greedy heuristic. The genetic algorithm performs better on instances with small K, but its performance declines sharply as K grows. Hereafter this genetic algorithm will be referred to as HJ-GA and the stochastic hill climber as HJ-SHC.

    Hiley and Julstrom [5] cited a practical application of QMKP in the situation where a manager has to select persons for multiple projects, each with its own budget, at the same time. The manager knows the salary of each person and the productivity of each person, both individually and in pairs. Obviously, the manager will try to assign persons to projects in a way that maximizes the overall productivity without exceeding the budget of any project.

    Clearly QMKP is a grouping problem [6, 7], i.e. a problem that seeks an optimal assignment of objects into different groups according to a given cost function, subject to some constraints. Therefore, when designing a genetic algorithm for this problem, the genetic operators should be designed so that they preserve grouping information as far as possible while generating new chromosomes [6, 7]. The genetic algorithm for the QMKP that this paper describes is designed with exactly this idea. Falkenauer [6, 7] named such genetic algorithms grouping genetic algorithms. Like Hiley and Julstrom [5], we also assume that all knapsack capacities Cj are the same and equal to C. We have compared our genetic algorithm with the genetic algorithm and the stochastic hill climber proposed by Hiley and Julstrom [5]. The results show the effectiveness of our approach.

    This paper is organized as follows: Section 2 describes our grouping genetic algorithm, computational results are presented in Section 3, and Section 4 outlines some conclusions.


2 The Grouping Genetic Algorithm

    We have developed a steady-state grouping genetic algorithm (SSGGA) for the QMKP. A steady-state genetic algorithm uses the steady-state population replacement method [8]: the algorithm repeatedly selects two parents and applies crossover and mutation to produce a single child, which replaces a less fit member of the population. This differs from generational replacement, where a whole new population of children is created and replaces the entire parent population. The steady-state method has an advantage over the generational method because the best solutions are always kept in the population and a child is immediately available for selection and reproduction, so better solutions can possibly be found more quickly. Moreover, with the steady-state method we can easily avoid multiple copies of the same individual in the population. In the generational approach multiple copies of the same individual can exist in the population. Though these are usually the best individuals, they can rapidly dominate the whole population; in this situation no further improvement in solution quality is possible without mutation, and often a much higher mutation rate is required to obtain further improvements. In the steady-state approach the child can easily be checked against the existing population members and discarded if it is identical to any existing individual. In this way premature convergence is deterred by disallowing multiple copies of the same individual in the population.

    Before describing the main features of SSGGA, we need the concept of relative value density [5]: the relative value density of an object i with respect to a set S of objects is the sum of its profit value pi and all profit values pij such that j ∈ S, divided by its weight wi. The main features of SSGGA are described below.
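This definition translates directly into code. An illustrative Python sketch (names are ours; pi, pij and wi are held in hypothetical arrays p, pair and w):

```python
def relative_value_density(i, S, p, pair, w):
    """Relative value density of object i with respect to a set S of objects:
    (p_i + sum of p_ij for all j in S) / w_i."""
    return (p[i] + sum(pair[i][j] for j in S)) / w[i]
```

With an empty S this is just the object's profit-to-weight ratio, which is the density used when a knapsack is still empty.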

2.1 Chromosome Representation

    A chromosome is represented as a set of knapsacks, i.e. there is no ordering among the knapsacks. With such a representation there is no redundancy: every solution is represented uniquely.

2.2 Fitness

    Fitness of a chromosome is equal to the overall profit of the solution it represents.

2.3 Crossover

    Our crossover operator is derived from the crossover operator proposed in [9] for the one-dimensional bin-packing problem and consists of two phases. The first phase iteratively builds the child chromosome. During each iteration, it selects one of the two parents uniformly at random and copies the knapsack with the largest profit value from the selected parent to the child. It then deletes all objects belonging to this knapsack from both parents and updates the profit values of the parents' knapsacks accordingly. This process is repeated K times. The second phase iteratively tries to include as many unassigned objects as it can into the knapsacks without violating the capacity constraints. During each iteration it selects a knapsack at random and adds to it the unassigned object that fits and has the highest relative value density with respect to the objects already in the selected knapsack. This process is repeated until no more objects can be added to any of the knapsacks.
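The two-phase crossover can be sketched as follows. This is an illustrative Python reconstruction under the equal-capacity assumption, not the paper's C implementation; all names are ours, and a chromosome is modelled as a list of sets of object indices.

```python
import random

def knapsack_profit(sack, p, pair):
    """Profit of one knapsack: individual profits plus pairwise profits."""
    items = sorted(sack)
    total = sum(p[i] for i in items)
    for a in range(len(items) - 1):
        for b in range(a + 1, len(items)):
            total += pair[items[a]][items[b]]
    return total

def greedy_fill(child, unassigned, p, pair, w, C, rng):
    """Second phase: repeatedly pick a random knapsack and insert the fitting
    unassigned object with the highest relative value density w.r.t. its
    current contents, until no knapsack can accept any more objects."""
    open_sacks = list(range(len(child)))
    while open_sacks and unassigned:
        k = rng.choice(open_sacks)
        slack = C - sum(w[i] for i in child[k])
        fitting = [i for i in unassigned if w[i] <= slack]
        if not fitting:
            open_sacks.remove(k)   # nothing fits here any more
            continue
        best = max(fitting, key=lambda i:
                   (p[i] + sum(pair[i][j] for j in child[k])) / w[i])
        child[k].add(best)
        unassigned.remove(best)
    return child

def crossover(parent_a, parent_b, n, p, pair, w, C, rng):
    """First phase: K times, copy the most profitable knapsack of a randomly
    chosen parent into the child, then delete its objects from both parents."""
    parents = [[set(s) for s in parent_a], [set(s) for s in parent_b]]
    K = len(parent_a)
    child = []
    for _ in range(K):
        src = parents[rng.randrange(2)]
        best = max(src, key=lambda s: knapsack_profit(s, p, pair))
        chosen = set(best)              # copy before deleting
        child.append(chosen)
        for par in parents:             # remove copied objects everywhere
            for s in par:
                s -= chosen
    unassigned = set(range(n)) - set().union(*child)
    return greedy_fill(child, unassigned, p, pair, w, C, rng)
```

Because deleted objects are removed from both parents after each copy, the child's knapsacks are disjoint by construction, as the representation requires.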

    We use binary tournament selection to select the two parents for crossover: the more fit candidate is selected with probability pbetter.

    As in [5, 9], crossover and mutation are used in a mutually exclusive manner, i.e. each child is generated by either the crossover operator or the mutation operator but never by both. Crossover is applied with probability pc; otherwise mutation is used.
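Binary tournament selection itself is straightforward; a short illustrative Python sketch (names ours):

```python
import random

def binary_tournament(population, fitness, p_better, rng):
    """Pick two distinct candidates at random; return the fitter one with
    probability p_better and the other one otherwise."""
    a, b = rng.sample(range(len(population)), 2)
    if fitness[a] < fitness[b]:
        a, b = b, a                      # a is now the fitter candidate
    return population[a] if rng.random() < p_better else population[b]
```

Setting p_better below 1.0 keeps some selection pressure while still letting weaker candidates reproduce occasionally.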

2.4 Mutation


    The mutation operator removes some of the objects from the knapsacks and then proceeds like the second phase of the crossover operator. We use 3-ary tournament selection to select a chromosome for mutation, where the candidate with the best fitness is selected with probability 1.0. 3-ary tournament selection is used because a more fit chromosome has a greater chance of producing a better chromosome after mutation.
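A sketch of this operator, again in illustrative Python rather than the paper's C (names ours). The removal probability is a parameter here; Section 3 sets it to 2×K/nobj. The refill step mirrors the second phase of crossover.

```python
import random

def mutate(chromo, n, p, pair, w, C, rng, removal_prob):
    """Remove each allocated object with probability removal_prob, then refill
    greedily: pick a random knapsack and insert the fitting unassigned object
    with the highest relative value density w.r.t. that knapsack's contents."""
    sacks = [set(s) for s in chromo]
    for s in sacks:
        for i in list(s):
            if rng.random() < removal_prob:
                s.remove(i)
    unassigned = set(range(n)) - set().union(*sacks)
    open_ks = list(range(len(sacks)))
    while open_ks and unassigned:
        k = rng.choice(open_ks)
        slack = C - sum(w[i] for i in sacks[k])
        fitting = [i for i in unassigned if w[i] <= slack]
        if not fitting:
            open_ks.remove(k)            # this knapsack is full for good
            continue
        best = max(fitting, key=lambda i:
                   (p[i] + sum(pair[i][j] for j in sacks[k])) / w[i])
        sacks[k].add(best)
        unassigned.remove(best)
    return sacks
```

The removed objects may be reinserted into different knapsacks during the refill, which is what moves the search to a neighbouring grouping.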

2.5 Replacement Policy

    The generated child is first tested for uniqueness against the existing population members. If it is unique, it replaces the least fit member of the population; otherwise it is discarded.

2.6 Initial Population Generation

    Each member of the initial population is generated using a procedure derived from the greedy heuristic proposed in [5]. The procedure differs from the greedy heuristic in that it selects the first object of each knapsack randomly from the list of unassigned objects, rather than taking the unassigned object with the highest relative value density with respect to that list. Initially all knapsacks are empty. The procedure fills the knapsacks one by one. The first object of a knapsack is selected randomly as just described; then objects are added to the knapsack iteratively. During each iteration, an unassigned object that fits and has the highest relative value density with respect to the objects already in the knapsack is added. This is repeated until no more objects can be added to the knapsack, after which the next knapsack is filled in the same way, until all knapsacks have been filled.

    Each newly generated chromosome is checked for uniqueness against the population members generated so far; if it is unique it is included in the initial population, otherwise it is discarded.
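The construction procedure can be sketched as follows (illustrative Python under our naming, not the paper's C code; C is the single shared capacity):

```python
import random

def random_greedy_solution(n, K, p, pair, w, C, rng):
    """Fill K knapsacks one by one: seed each with a random unassigned object,
    then repeatedly add the fitting unassigned object with the highest
    relative value density w.r.t. the knapsack's current contents."""
    unassigned = set(range(n))
    sacks = []
    for _ in range(K):
        sack = set()
        seedable = [i for i in unassigned if w[i] <= C]
        if seedable:
            first = rng.choice(seedable)  # random seed object, per SSGGA
            sack.add(first)
            unassigned.remove(first)
        while True:
            slack = C - sum(w[i] for i in sack)
            fitting = [i for i in unassigned if w[i] <= slack]
            if not fitting:
                break                     # knapsack cannot take more objects
            best = max(fitting, key=lambda i:
                       (p[i] + sum(pair[i][j] for j in sack)) / w[i])
            sack.add(best)
            unassigned.remove(best)
        sacks.append(sack)
    return sacks
```

The random seed object is the only departure from the greedy heuristic of [5]; it is what gives the initial population its diversity.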


3 Computational Results

    SSGGA has been coded in C and executed on a Pentium 4 system with 512 MB RAM, running at 2.4 GHz under Red Hat Linux 9.0. In all our experiments with SSGGA we have used pc = 0.6 and pbetter = 0.8. Mutation tries to remove each object allocated to a knapsack with probability 2×K/nobj, where nobj is the number of objects allocated to knapsacks; with this probability, mutation removes on average two objects per knapsack. The mutation operator of HJ-GA [5] also removes two objects from each knapsack. The population size of SSGGA is equal to n, the number of objects in the test instance. All the parameter values were chosen after a large number of trials; they provide good results, although they are in no way optimal for all problem instances. We have tested SSGGA on the same 60 QMKP instances as used by Hiley and Julstrom [5]. These instances are characterized by three things: the density d (the proportion of non-zero pij), the number of objects n, and the number of knapsacks K. For every instance, the knapsack capacity is set to 80% of the sum of the instance's object weights divided by K. For these instances d is either 0.25 or 0.75, n is either 100 or 200, and K can take any value from {3, 5, 10}. There are 5 instances for each combination of d, n and K, resulting in a total of 60 instances. Originally these are instances of the quadratic knapsack problem and are available at http://cermsem.univ-paris1.fr/soutif/QKP/QKP.html. SSGGA was executed 40 times on each instance, each time with a different random seed. During each run both HJ-GA and HJ-SHC generate 20000 candidate solutions; therefore, to allow a fair comparison, SSGGA also generates 20000 candidate solutions per run.

    Tables 1 and 2 compare the performance of SSGGA with HJ-GA and HJ-SHC. Table 1 reports the performance of the three algorithms on instances with d = 0.25, whereas Table 2 reports the same for d = 0.75. Data for HJ-GA and HJ-SHC are taken from [5]. For each instance, Tables 1 and 2 report the number of objects n, the number of knapsacks K, its serial number and the knapsack capacity C. For each of the three approaches on each instance the tables report the best and average solution values, the standard deviation of solution values and the average execution time in seconds.

    Tables 1 and 2 clearly show the superiority of SSGGA over HJ-GA and HJ-SHC. The average solution values obtained by SSGGA are always better than those obtained by HJ-GA and HJ-SHC. In comparison to HJ-GA, the best solution of SSGGA is better on 56 instances and worse on 4 instances. The best solution of SSGGA is better than that of HJ-SHC on 57 instances and worse on 3 instances.

    HJ-GA and HJ-SHC were executed on a Pentium 4, 2.53 GHz system with 1 GB RAM, whereas SSGGA was executed on a Pentium 4, 2.4 GHz system with 512 MB RAM. Therefore it is not possible to compare the running times exactly. However, on instances with K = 3, SSGGA is clearly slower than HJ-GA and HJ-SHC, whereas on instances with K = 10 it is clearly faster. On instances with K = 5, the running times of SSGGA are less than those of HJ-SHC and roughly the same as those of HJ-GA. The running time of HJ-GA and HJ-SHC increases as K increases, whereas that of SSGGA decreases. The
