     A New Grouping Genetic Algorithm for the Quadratic

    Multiple Knapsack Problem

Alok Singh1 and Anurag Singh Baghel2

    1 J. K. Institute of Applied Physics and Technology, Faculty of Science, University of Allahabad, Allahabad - 211002, UP, India

alok@jkinstitute.org

    2 Department of Electronics and Communication, Banasthali Vidyapith Jaipur Campus,

    Sarojini Marg, Jaipur - 302001, Rajasthan, India

anuragsbaghel@yahoo.com

    Abstract. The quadratic multiple knapsack problem is an extension of the well-known 0/1 multiple knapsack problem. In the quadratic multiple knapsack problem, profit values are associated not only with individual objects but also with pairs of objects. The profit value associated with a pair of objects is added to the overall profit if both objects of the pair belong to the same knapsack. Being an extension of the 0/1 multiple knapsack problem, this problem is also NP-Hard. In this paper, we have proposed a new steady-state grouping genetic algorithm for the quadratic multiple knapsack problem and compared our results with two recently proposed methods: a genetic algorithm and a stochastic hill climber. The results show the effectiveness of our approach.

    Keywords: Combinatorial optimization, grouping genetic algorithm, knapsack problem, quadratic multiple knapsack problem.

1 Introduction

    The quadratic multiple knapsack problem (QMKP) is an extension of the well-known 0/1 multiple knapsack problem (MKP). In MKP we are given a set of n objects and K knapsacks. Each object i, i \in {1, 2, ..., n}, has profit p_i and weight w_i. Each knapsack j, j \in {1, 2, ..., K}, has a capacity C_j. The multiple knapsack problem consists in selecting K disjoint subsets of objects to be put into the K knapsacks such that the total weight of the objects in subset j does not exceed C_j and the overall profit of all the selected objects is as large as possible. By introducing binary variables x_{ij} to indicate whether object i is included in knapsack j (x_{ij} = 1) or not (x_{ij} = 0), the MKP can be formulated as:

Maximize

    P = \sum_{i=1}^{n} \sum_{j=1}^{K} x_{ij} p_i

Subject to

    \sum_{i=1}^{n} x_{ij} w_i \le C_j ,   j = 1, 2, ..., K

    \sum_{j=1}^{K} x_{ij} \le 1 ,   i = 1, 2, ..., n

    x_{ij} \in \{0, 1\} ,   i = 1, 2, ..., n ,  j = 1, 2, ..., K

    MKP is NP-Hard, as for K = 1 it reduces to the 0/1 knapsack problem [1]. QMKP is an extension of MKP. The only difference between QMKP and MKP is that in QMKP profit values are associated not only with individual objects but also with pairs of objects. The profit value p_{ij} associated with a pair of objects i and j is added to the overall profit if both objects i and j belong to the same knapsack. Therefore QMKP seeks to maximize

    P = \sum_{i=1}^{n} \sum_{j=1}^{K} x_{ij} p_i + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{K} x_{ik} x_{jk} p_{ij}

    QMKP is NP-Hard as it reduces to MKP when all p_{ij} are zero.
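
    To make the objective concrete, the following minimal Python sketch (our illustration only; the function names and the convention that pair profits are stored as P[i][j] for i < j are our assumptions, not from the paper) evaluates a candidate assignment, where assignment[i] is the index of the knapsack holding object i, or None if the object is left out:

def qmkp_profit(assignment, p, P):
    # Overall QMKP profit of an assignment.
    #   assignment[i] : knapsack index of object i, or None if unassigned
    #   p[i]          : individual profit of object i
    #   P[i][j]       : pair profit of objects i and j (i < j), 0 if absent
    n = len(assignment)
    total = 0
    for i in range(n):
        if assignment[i] is None:
            continue
        total += p[i]                          # linear term
        for j in range(i + 1, n):
            if assignment[j] == assignment[i]:
                total += P[i][j]               # quadratic term: same knapsack
    return total

def feasible(assignment, w, C):
    # Check that no knapsack capacity C[k] is exceeded.
    load = [0] * len(C)
    for i, k in enumerate(assignment):
        if k is not None:
            load[k] += w[i]
    return all(load[k] <= C[k] for k in range(len(C)))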

    Though MKP is widely studied and a number of evolutionary algorithms have been proposed for it [2, 3, 4], QMKP was only recently defined and studied by Hiley and Julstrom [5]. They considered a restricted version of QMKP in which all capacities C_j are the same. They presented three methods for their version of QMKP: a greedy heuristic, a generational genetic algorithm and a stochastic hill climber. On the test instances considered, the stochastic hill climber obtained the best solutions on average, followed by the genetic algorithm and the greedy heuristic. The genetic algorithm performs better on instances with small K, but its performance declines sharply as K grows. Hereafter this genetic algorithm will be referred to as HJ-GA and the stochastic hill climber as HJ-SHC.

    Hiley and Julstrom [5] cited a practical application of QMKP in the situation where a manager has to select persons for multiple projects, each with its own budget, at the same time. The manager knows the salary of each person and the productivity of each person, both individually and in pairs. Obviously, the manager will try to assign persons to projects in a way that maximizes the overall productivity without exceeding the budget of any project.

    Clearly QMKP is a grouping problem [6, 7], i.e. a problem that seeks an optimal assignment of objects into different groups according to a given cost function, subject to some constraints. Therefore, when designing a genetic algorithm for this problem, the genetic operators should be designed so that they preserve grouping information as far as possible while generating new chromosomes [6, 7]. The genetic algorithm for the QMKP that this paper describes is designed with exactly this idea. Falkenauer [6, 7] named such genetic algorithms grouping genetic algorithms. Like Hiley and Julstrom [5], we also assume that all knapsack capacities C_j are the same and equal to C. We have compared our genetic algorithm with the genetic algorithm and the stochastic hill climber proposed by Hiley and Julstrom [5]. The results show the effectiveness of our approach.

    This paper is organized as follows: Section 2 describes our grouping genetic algorithm. Computational results are presented in Section 3, whereas Section 4 outlines some conclusions.

2 The Grouping Genetic Algorithm

    We have developed a steady-state grouping genetic algorithm (SSGGA) for the QMKP. A steady-state genetic algorithm uses the steady-state population replacement method [8]. In this method the genetic algorithm repeatedly selects two parents and performs crossover and mutation to produce a single child that replaces a less fit member of the population. This is different from generational replacement, where a new population of children is created and the whole parent population is replaced. The steady-state population replacement method has an advantage over the generational method in that the best solutions are always kept in the population and the child is immediately available for selection and reproduction, so better solutions can possibly be found more quickly. Moreover, with the steady-state population replacement method we can easily avoid multiple copies of the same individual in the population. In the generational approach multiple copies of the same individual can exist in the population. Though these individuals are usually the best individuals, they can rapidly dominate the whole population. In this situation, no further improvement in solution quality is possible without mutation, and often a much higher mutation rate is required to get further improvements. In the steady-state approach the child can easily be checked against the existing population members, and if it is identical to any existing individual in the population it is discarded. In this way premature convergence is deterred by disallowing multiple copies of the same individual in the population. The main features of SSGGA are described below. However, before describing the main features of SSGGA, we need to define the concept of relative value density [5]. The relative value density of an object i with respect to a set S of objects is the sum of its profit value p_i and all profit values p_{ij} such that j \in S, divided by its weight w_i.
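
    As a small illustration (our own sketch, not code from the paper; it uses the same profit and weight arrays as in the earlier sketch), the relative value density of an object i with respect to a set S of objects can be computed as:

def relative_value_density(i, S, p, P, w):
    # (p[i] + sum of pair profits between i and the members of S) / w[i],
    # with pair profits stored as P[a][b] for a < b.
    pair_sum = sum(P[min(i, j)][max(i, j)] for j in S if j != i)
    return (p[i] + pair_sum) / w[i]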

2.1 Chromosome Representation

    A chromosome is represented as a set of knapsacks, i.e. there is no ordering among the knapsacks. With such a representation there is no redundancy; every solution is represented uniquely.

2.2 Fitness

    The fitness of a chromosome is equal to the overall profit of the solution it represents.
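
    Putting Sections 2.1 and 2.2 together, a chromosome can be held simply as a collection of object sets, one per knapsack, and its fitness computed directly from them. The sketch below is our own illustration under that assumption, with the same profit arrays as before:

def fitness(chromosome, p, P):
    # chromosome : list of sets of object indices, one set per knapsack.
    # Fitness = overall profit of the solution the chromosome represents.
    total = 0
    for knapsack in chromosome:
        objects = sorted(knapsack)
        total += sum(p[i] for i in objects)          # individual profits
        for a in range(len(objects)):                # pair profits within
            for b in range(a + 1, len(objects)):     # the same knapsack
                total += P[objects[a]][objects[b]]
    return total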

2.3 Crossover

    Our crossover operator is derived from the crossover operator proposed in [9] for the one-dimensional bin-packing problem. It consists of two phases. The first phase iteratively builds the child chromosome. During each iteration, it selects one of the two parents uniformly at random and copies the knapsack with the largest profit value from the selected parent to the child. Then it deletes all the objects belonging to this knapsack from both parents, and the profit values of the knapsacks of


population members.

    If it is unique, then it always replaces the least fit member of the population; otherwise it is discarded.
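
    This replacement policy can be sketched as follows (our own illustration, reusing the fitness function above and assuming the population is stored as a list of chromosomes):

def same_solution(a, b):
    # Chromosomes are unordered sets of knapsacks, so compare them
    # irrespective of knapsack order.
    return sorted(map(sorted, a)) == sorted(map(sorted, b))

def try_insert(child, population, p, P):
    # A unique child always replaces the least fit member of the
    # population; a child identical to an existing member is discarded.
    if any(same_solution(child, member) for member in population):
        return False
    worst = min(range(len(population)),
                key=lambda k: fitness(population[k], p, P))
    population[worst] = child
    return True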

2.6 Initial Population Generation

    Each member of the initial population is generated using a procedure derived from the greedy heuristic proposed in [5]. The procedure used here differs from the greedy heuristic in that it selects the first object of each knapsack randomly from the list of unassigned objects, rather than taking the unassigned object with the highest relative value density with respect to the list of unassigned objects. Initially all the knapsacks are empty. The procedure fills each knapsack one by one. The first object of the knapsack is selected randomly as already described. Then objects are added to the knapsack iteratively. During each iteration, an unassigned object that fits and has the highest relative value density with respect to the objects already in the knapsack is added to the knapsack. This process is repeated until it becomes impossible to add any more objects to the knapsack. After this the next knapsack is filled in the same way. The whole process is repeated until all the knapsacks have been filled.
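
    This construction procedure might be sketched as follows (again our own illustration; relative_value_density is the function sketched earlier, and all knapsacks share the common capacity C):

import random

def build_solution(n, K, C, p, P, w):
    # Greedy-random construction of one initial chromosome: each knapsack
    # starts with a random unassigned object and is then filled greedily
    # by relative value density with respect to its current contents.
    unassigned = set(range(n))
    chromosome = []
    for _ in range(K):
        knapsack, load = set(), 0
        seeds = [i for i in unassigned if w[i] <= C]
        if seeds:
            first = random.choice(seeds)               # random first object
            knapsack.add(first)
            load += w[first]
            unassigned.discard(first)
        while True:
            fitting = [i for i in unassigned if load + w[i] <= C]
            if not fitting:
                break                                  # knapsack is full
            best = max(fitting, key=lambda i:
                       relative_value_density(i, knapsack, p, P, w))
            knapsack.add(best)
            load += w[best]
            unassigned.discard(best)
        chromosome.append(knapsack)
    return chromosome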

    Each newly generated chromosome is checked for uniqueness against the population members generated so far; if it is unique it is included in the initial population, otherwise it is discarded.

3 Computational Results

    SSGGA has been coded in C and executed on a Pentium 4 system with 512 MB RAM, running at 2.4 GHz under Red Hat Linux 9.0. In all our experiments with SSGGA we have used pc = 0.6 and pbetter = 0.8. Mutation tries to remove each object allocated to a knapsack with probability 2K/nobj, where nobj is the number of objects allocated to knapsacks. With this probability, mutation will remove on average two objects from each knapsack. The population size of SSGGA is equal to n, the number of objects in the test instance. All the parameter values were chosen after a large number of trials. These parameter values provide good results, although they are in no way optimal for all problem instances.
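
    The mutation step, as parameterized above, might look as follows (our own sketch; what is done with the objects freed by mutation is not reproduced in the text above and is therefore not shown):

import random

def mutate(chromosome, K):
    # Remove each allocated object from its knapsack with probability
    # 2*K/nobj, so that on average about two objects are removed from
    # each knapsack.
    nobj = sum(len(knapsack) for knapsack in chromosome)
    if nobj == 0:
        return
    prob = 2.0 * K / nobj
    for knapsack in chromosome:
        for obj in list(knapsack):       # copy, since the set is modified
            if random.random() < prob:
                knapsack.remove(obj)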

    We have tested SSGGA on the same 60 QMKP instances as used by Hiley and Julstrom [5]. These instances are characterized by three things: the density d (the proportion of non-zero p_{ij}), the number of objects n, and the number of knapsacks K. For every instance, the knapsack capacities are set to 80% of the sum of the instance's object weights divided by K. For these instances d is either 0.25 or 0.75, n is either 100 or 200, and K can take any value from {3, 5, 10}. There are 5 instances for each combination of d, n and K, resulting in a total of 60 instances. Originally these are instances of the quadratic knapsack problem and are available at http://cermsem.univ-paris1.fr/soutif/QKP/QKP.html. SSGGA was executed 40 times on each instance, each time with a different random seed. During each run both HJ-GA and HJ-SHC generate 20000 candidate solutions. Therefore, to allow a fair comparison with HJ-GA and HJ-SHC, SSGGA also generates 20000 candidate solutions during each run.
