
A General Framework for Providing Interval Representations of Pareto Optimal Outcomes for Large-Scale Bi- and Tri-Criteria MIP Problems

Abstract

The Multi-Objective Mixed-Integer Programming (MOMIP) problem is one of the most challenging classes of optimization problems. To derive its Pareto optimal solutions, one can use the well-known Chebyshev scalarization and Mixed-Integer Programming (MIP) solvers. However, for a large-scale instance of the MOMIP problem, its scalarization may not be solved to optimality, even by state-of-the-art optimization packages, within the time limit imposed on the optimization. If a MIP solver cannot derive the optimal solution within the assumed time limit, it provides the optimality gap, which gauges the quality of the approximate solution. However, for the MOMIP case, no information is provided on the lower and upper bounds of the components of the Pareto optimal outcome. For the MOMIP problem with two and three objective functions, an algorithm is proposed to provide the so-called interval representation of the Pareto optimal outcome designated by the weighting vector when there is a time limit on solving the Chebyshev scalarization. Such interval representations can be used to navigate on the Pareto front. The results of several numerical experiments on selected large-scale instances of the multi-objective multidimensional 0–1 knapsack problem illustrate the proposed approach. The limitations and possible enhancements of the proposed method are also discussed.

1 Introduction

The derivation of optimal solutions to large-scale instances of the Mixed-Integer Programming (MIP) problem can be impossible within a reasonable time limit even for contemporary commercial MIP solvers, e.g. GUROBI (Gurobi, 2023) or CPLEX (IBM, 2023). In this case, a MIP solver provides the optimality gap (MIP gap), which gauges the quality of the approximate solution, i.e. the last feasible solution found (the incumbent). This optimality gap is calculated from the incumbent and the so-called MIP best bound.

In the case of the Multi-Objective MIP (MOMIP) problem, scalarization techniques and MIP solvers can be used to derive Pareto optimal solutions (see, e.g. Miettinen, 1999; Ehrgott, 2005). Examples of applying MIP packages to solve multi-criteria decision problems are shown in, e.g. Ahmadi et al. (2012), Delorme et al. (2014), Eiselt and Marianov (2014), Oke and Siddiqui (2015), Samanlioglu (2013). As a scalarization technique, one can use the Chebyshev scalarization that guarantees the derivation of each (properly) Pareto optimal solution (see, e.g. Kaliszewski, 2006). Other advantages of using this scalarization in the context of decision-making and expressing the decision maker’s preferences are discussed in, e.g. Miroforidis (2021).

In the current work, we say that an instance of the MOMIP problem is large-scale if its Chebyshev scalarization cannot be solved to optimality by a MIP solver within an assumed time limit that is reasonable in the decision-making process. The existence of such a limit is justified in solving practical multi-criteria decision-making problems. When there is a time limit on deriving a single Pareto optimal solution, the Chebyshev scalarization of the instance may not be solved to optimality. The decision maker (DM) then obtains the incumbent, i.e. an approximation of the Pareto optimal solution, as well as the MIP gap of the single-objective optimization problem. However, based on this information, the quality of the approximation of a single component (namely, its lower and upper bounds) of the Pareto optimal outcome, i.e. the image of the Pareto optimal solution in the objective space, cannot be shown to the DM. And it is based on these components that the DM navigates on the Pareto front (the set of Pareto optimal outcomes). Fortunately, a method to provide the DM with such lower and upper bounds exists in the literature.

In Kaliszewski and Miroforidis (2019), a general methodology for multi-objective optimization to provide lower and upper bounds on objective function values of a Pareto optimal solution designated by a vector of weights of the Chebyshev scalarization of a multi-objective optimization problem has been proposed. The bounds form the so-called interval representation of the Pareto optimal outcome. The DM can use interval representations instead of (unknown to him/her) Pareto optimal outcomes, to navigate on the Pareto front. To derive them, one needs the so-called lower shells and upper shells whose images in the objective space are finite two-sided approximations of the Pareto front (see, e.g. Kaliszewski and Miroforidis, 2014).

In Kaliszewski and Miroforidis (2022), it has been shown how to provide lower and upper shells to large-scale instances of the MOMIP problem. In that work, lower shells are composed of incumbents to the Chebyshev scalarization of the MOMIP problem derived within the time limit, and upper shells consist of elements that are solutions to the Chebyshev scalarization of a relaxation of the MOMIP problem.

However, there is a lack of an algorithmic method for deriving an upper shell that is necessary to calculate the interval representation of the Pareto optimal outcome designated by a given vector of weights of the Chebyshev scalarizing function. The idea of how to derive such useful upper shells for the MOMIP problem with two objective functions has been shown in our earlier works (Kaliszewski and Miroforidis, 2021) and (Miroforidis, 2021).

In the current work, we combine ideas from works (Kaliszewski and Miroforidis, 2019, 2021, 2022), and (Miroforidis, 2021). For this reason, our work is an incremental one. For the MOMIP problem with up to three objective functions, we propose an algorithmic method of deriving upper shells that can be used to calculate the interval representation of a single Pareto optimal outcome designated by a given vector of weights of the Chebyshev scalarizing function. This opens the way for providing the DM with this representation when there is a time limit for deriving a single Pareto optimal solution. Because of the need to derive the appropriate upper shells, additional time is needed for optimization, but as we show in numerical experiments, this time can be a fraction of the assumed time limit. To illustrate our method, we present results of several numerical experiments with selected large-scale instances of the multi-objective multidimensional 0–1 knapsack problem.

To the best of our knowledge, the method we propose is the only algorithmic method for determining the interval representation of the Pareto optimal outcome given by weights of the Chebyshev scalarizing function for large-scale instances of the MOMIP problem, assuming the existence of a time budget for optimization.

The main contribution of this article is summarized as follows:

  • We propose a generic framework for providing interval representations of Pareto optimal outcomes, designated by weights of the Chebyshev scalarization, of the MOMIP problem when there is a time budget for optimization.

  • We propose two algorithms, which are realizations of the framework.

  • We demonstrate the operation of these algorithms on computationally demanding large-scale instances of the MOMIP problem with up to three objective functions.

  • We discuss possible directions for changing the framework to better adapt it to the realities of decision-making and the budgeting of calculations.

The current work is organized as follows. In Section 2, we formulate the MOMIP problem and we recall a method for the derivation of Pareto optimal solutions with the use of the Chebyshev scalarization. In Section 3, we briefly recall the theory of parametric lower and upper bounds. There, we also introduce the concept of the interval representation of the implicit Pareto optimal outcome as well as an indicator measuring its quality. In Section 4, we present two versions of an algorithm for deriving interval representations of implicit Pareto optimal outcomes. In Section 5, we conduct extensive numerical experiments, as well as discuss their results. In Section 6, we show the limitations of the proposed method, as well as discuss how to eliminate them. Section 7 contains some final remarks.

2 Background

In this section, we formulate the MOMIP problem, and we recall a method for the derivation of Pareto optimal solutions with the use of the Chebyshev scalarization.

Let $X := \mathbb{R}^{n_1} \times \mathbb{Z}^{n_2}$, $n_1 + n_2 = n$, $n_2 > 0$, and let $x \in X$ denote a solution. Let $X_0 := \{x \in X \mid g_p(x) \leq b_p,\ b_p \in \mathbb{R},\ p = 1, \dots, m\}$ denote the set of feasible solutions, where $g_p : \mathbb{R}^n \to \mathbb{R}$, $p = 1, \dots, m$, $m \geq 1$. The MOMIP problem is defined as follows:

(1)
$$\operatorname{vmax}\ f(x) \quad \text{s.t. } x \in X_0,$$
where $f : \mathbb{R}^n \to \mathbb{R}^k$, $f = (f_1, \dots, f_k)$, $f_l : \mathbb{R}^n \to \mathbb{R}$, $l = 1, \dots, k$, $k \geq 2$, are objective functions, and "vmax" is the operator of deriving the set $N$ that contains all Pareto optimal solutions in $X_0$. The space $\mathbb{R}^k$ is called the objective space. A solution $\bar{x} \in X_0$ is Pareto optimal if, for any $x \in X_0$, $f_l(x) \geq f_l(\bar{x})$, $l = 1, \dots, k$, implies $f(x) = f(\bar{x})$. If $f_l(x) \geq f_l(\bar{x})$, $l = 1, \dots, k$, and $f(x) \neq f(\bar{x})$, then $x$ dominates $\bar{x}$ ($\bar{x}$ is dominated), which is denoted by $x \succ \bar{x}$. We say that the element $f(x)$, $x \in X_0$, is the outcome of $x$. The set $f(N)$ is called the Pareto front.
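The dominance relation above is straightforward to check computationally. The following minimal sketch is ours, not part of the paper; it tests, for the maximization setting of problem (1), whether one outcome vector dominates another:

```python
def dominates(fx, fy):
    """True iff outcome fx dominates outcome fy (maximization):
    fx is componentwise >= fy and strictly better in at least one component."""
    return all(a >= b for a, b in zip(fx, fy)) and any(a > b for a, b in zip(fx, fy))
```

For instance, `dominates((3, 5), (3, 4))` holds, while neither `(3, 4)` vs. itself nor two incomparable outcomes such as `(5, 1)` and `(1, 5)` are in the relation.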

According to well-established knowledge (Ehrgott, 2005; Kaliszewski, 2006; Kaliszewski et al., 2016; Miettinen, 1999), solution x is Pareto optimal (actually, x is properly Pareto optimal, see, e.g. Ehrgott, 2005; Kaliszewski, 2006; Kaliszewski et al., 2016; Miettinen, 1999) if and only if it solves the Chebyshev scalarization of problem (1), namely

(2)
$$\min_{x \in X_0}\ \Big[\max_{l}\ \lambda_l \big(y_l - f_l(x)\big) + \rho\, e^k \big(y - f(x)\big)\Big],$$
where weights $\lambda_l > 0$, $l = 1, \dots, k$, $e^k = (1, 1, \dots, 1)$, $y_l = \hat{y}_l + \varepsilon$, $\hat{y}_l = \max_{x \in X_0} f_l(x)$, $\hat{y}_l < \infty$, $l = 1, \dots, k$, $\varepsilon > 0$, and $\rho$ is a positive "sufficiently small" number.

The linearized version of problem (2) is the following.

(3)
$$\min\ s \quad \text{s.t.}\quad s \geq \lambda_l \big(y_l - f_l(x)\big) + \rho\, e^k \big(y - f(x)\big),\ \ l = 1, \dots, k,\quad x \in X_0.$$
In the following, we will assume that Pareto optimal solutions come from solving problem (3) with varying $\lambda = (\lambda_1, \dots, \lambda_k)$.
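For a fixed outcome vector, the augmented Chebyshev scalarizing function of problem (2) is a simple closed-form expression. A sketch (ours; it operates on the outcome vector $f(x)$ rather than on the solution $x$, and does not perform any optimization):

```python
def chebyshev_value(f_x, y, lam, rho):
    """Augmented Chebyshev scalarizing function of problem (2):
    max_l lam_l * (y_l - f_l(x)) + rho * sum_l (y_l - f_l(x))."""
    diffs = [yl - fl for yl, fl in zip(y, f_x)]          # y - f(x), componentwise
    return max(l * d for l, d in zip(lam, diffs)) + rho * sum(diffs)
```

With $y = (10, 10)$, $\lambda = (0.5, 0.5)$, $\rho = 0.001$ and outcome $(8, 6)$, the weighted deviations are $(1.0, 2.0)$ and the augmentation term is $0.006$, giving the value $2.006$.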

Given $\lambda$, $x^{Popt}(\lambda)$ denotes the implicit Pareto optimal solution designated by $\lambda$, that is, the solution that would be derived if problem (2) with $\lambda$ were solved to optimality; $f(x^{Popt}(\lambda))$ denotes the implicit Pareto optimal outcome designated by $\lambda$.

3 Lower and Upper Bounds on Components of Implicit Pareto Optimal Outcomes

This section contains a brief description of the general theory of lower and upper bounds on components of implicit Pareto optimal outcomes proposed in Kaliszewski and Miroforidis (2019).

To calculate the bounds, one needs two finite sets (satisfying certain properties), namely a lower shell $S_L \subseteq X_0$ and an upper shell $S_U \subseteq \mathbb{R}^n$.

Given $\lambda$, $S_L$, and $S_U$, the theory provides formulas for calculating lower and upper bounds on $f_l(x^{Popt}(\lambda))$, $l = 1, \dots, k$. That is,

(4)
$$L_l(S_L, \lambda) \leq f_l\big(x^{Popt}(\lambda)\big) \leq U_l(S_U, \lambda), \quad l = 1, \dots, k.$$
Formulas for the lower bounds $L_l(S_L, \lambda)$ and upper bounds $U_l(S_U, \lambda)$ are given in Kaliszewski and Miroforidis (2022). In that work, all those elements of the theory of lower and upper bounds that are required to understand the rest of the current work are presented in a synthetic way.

In addition, and of great relevance to the current work, the theory specifies that only elements $x \in S_U$ appropriately located with respect to the vector of lower bounds $L(S_L, \lambda) := (L_1(S_L, \lambda), \dots, L_k(S_L, \lambda))$ can provide upper bounds $U_{\bar{l}}(\{x\}, \lambda) = f_{\bar{l}}(x)$ for $f_{\bar{l}}(x^{Popt}(\lambda))$ for some $\bar{l}$. This is specified by the following lemma, formulated in Kaliszewski and Miroforidis (2019).

Lemma 1.

Given lower shell $S_L$ and upper shell $S_U$, suppose that $x \in S_U$, that $L_{\bar{l}}(S_L, \lambda) \leq f_{\bar{l}}(x)$ for some $\bar{l}$, and that $L_l(S_L, \lambda) \geq f_l(x)$ for all $l = 1, \dots, k$, $l \neq \bar{l}$. Then $x$ provides an upper bound for $f_{\bar{l}}(x^{Popt}(\lambda))$, namely $f_{\bar{l}}(x^{Popt}(\lambda)) \leq f_{\bar{l}}(x)$.

Let $\bar{S}_U \subseteq S_U$ be the set of elements fulfilling Lemma 1 for some $\bar{l} \in \{1, \dots, k\}$. If $\bar{S}_U \neq \emptyset$, then each $x \in \bar{S}_U$ can provide an upper bound on $f_{\bar{l}}(x^{Popt}(\lambda))$, and $U_{\bar{l}}(S_U, \lambda) = \min_{x \in \bar{S}_U} U_{\bar{l}}(\{x\}, \lambda)$. If $\bar{S}_U = \emptyset$, then $U_{\bar{l}}(S_U, \lambda) = \hat{y}_{\bar{l}}$. In Section 5, we show how to set $U_{\bar{l}}(S_U, \lambda)$ when $\hat{y}_{\bar{l}}$ is not known.

Fig. 1

Components of $R(S_L, S_U, \lambda)$: □, ∘ – images of lower shell $S_L$ and upper shell $S_U$ elements, respectively, in the objective space; ▲ – vector of lower bounds; ■ – vector of upper bounds.


Further on, $U(S_U, \lambda) := (U_1(S_U, \lambda), \dots, U_k(S_U, \lambda))$ denotes the vector of upper bounds.

3.1 The Interval Representation of the Implicit Pareto Optimal Outcome

Given $\lambda$, $S_L$, and $S_U$, the interval representation of $f(x^{Popt}(\lambda))$ is $R(S_L, S_U, \lambda) = ([L_1(S_L, \lambda), U_1(S_U, \lambda)], \dots, [L_k(S_L, \lambda), U_k(S_U, \lambda)])$.

For $k = 2$, the components of the interval representation of $f(x^{Popt}(\lambda))$, lower and upper shells, as well as the vectors of lower and upper bounds, are illustrated in Fig. 1.

To gauge the quality of $R(S_L, S_U, \lambda)$, we calculate

(5)
$$GP_{sub,l}(\lambda) := 100 \times \frac{U_l(S_U, \lambda) - L_l(S_L, \lambda)}{U_l(S_U, \lambda)}, \quad l = 1, \dots, k.$$

$GP_{sub}(R(S_L, S_U, \lambda)) := (GP_{sub,1}(\lambda), \dots, GP_{sub,k}(\lambda))$ forms the Pareto suboptimality gap of the interval representation $R(S_L, S_U, \lambda)$.
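Computing the Pareto suboptimality gap from a pair of bound vectors is a one-liner per component. A sketch (the function name is ours):

```python
def pareto_suboptimality_gap(L, U):
    """Componentwise gap of formula (5): 100 * (U_l - L_l) / U_l, in percent.
    L and U are the vectors of lower and upper bounds for one lambda."""
    return [100.0 * (u - l) / u for l, u in zip(L, U)]
```

For example, with $L = (80, 95)$ and $U = (100, 100)$ the gap vector is $(20.0, 5.0)$ percent; note that the formula presumes positive upper bounds, which holds for the non-negative knapsack objectives used later in the paper.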

4 Providing Interval Representations of Implicit Pareto Optimal Outcomes

In this section, we develop a generic framework for providing interval representations of Pareto optimal outcomes, designated by weights of the Chebyshev scalarization, of the MOMIP problem when there is a time limit for optimization.

Given $\lambda$, we assume that there is a time limit $TL$ on solving problem (2) by a MIP solver. We also assume that if the MIP solver cannot derive the solution to (2), i.e. $x^{Popt}(\lambda)$, within $TL$, then it provides an incumbent $INC_\lambda$, which is an approximation of $x^{Popt}(\lambda)$. In this case, our goal is to provide $R(S_L, S_U, \lambda)$ calculated on some lower shell $S_L$ and some upper shell $S_U$.

4.1 The Derivation of Lower and Upper Shells

As in Kaliszewski and Miroforidis (2022), we will use $S_L := \{INC_\lambda\}$ as a valid lower shell, which one can use to calculate $L_l(S_L, \lambda)$, $l = 1, \dots, k$.

To populate upper shell $S_U$, the following two lemmas, formulated in Kaliszewski and Miroforidis (2022), can be used.

Lemma 2.

Given $\lambda$, the solution $x'$ to the relaxation of problem (2) with $X_0'$, $X_0 \subseteq X_0'$, is not dominated by the solution $x$ to problem (2) for any $\lambda$.

Lemma 3.

If $x'$ is a Pareto optimal solution to the relaxation of problem (1) with $X_0'$, then the set $\{x'\}$ is an upper shell to problem (1).

Given $X_0'$, $X_0 \subseteq X_0'$, and $\lambda'$, let $ChebRLX(X_0', \lambda')$ denote the Chebyshev scalarization (problem (2)) of a relaxation of the MOMIP problem (problem (1)) with feasible set $X_0'$ for the vector $\lambda'$. Based on Lemmas 2 and 3, one can derive a single-element upper shell by solving $ChebRLX(X_0', \lambda')$. Given $X_0'$, the union of such single-element upper shells derived for different vectors $\lambda'$ forms an upper shell $S_U$.

The surrogate relaxation of problem (1), with $X_0(\mu) := \{x \in X \mid \sum_{p=1}^{m} \mu_p g_p(x) \leq \sum_{p=1}^{m} \mu_p b_p\}$ instead of $X_0$, where $\mu_p \geq 0$, $p = 1, \dots, m$, $\mu \neq 0$, is a valid relaxation of this problem (see Kaliszewski and Miroforidis, 2022); $\mu$ is a vector of surrogate multipliers. We will use this type of relaxation, with $\mu$ as a parameter, to derive elements of an upper shell. We also rely on what has been shown in Kaliszewski and Miroforidis (2022) for large-scale instances of the MOMIP problem: for a given $\mu$, solving the Chebyshev scalarization of the surrogate relaxation of the MOMIP problem by a MIP solver is much easier than solving the Chebyshev scalarization of the MOMIP problem itself.
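For linear constraints $\sum_j a_{p,j} x_j \leq b_p$, forming the surrogate relaxation amounts to replacing the $m$ constraints by their single $\mu$-weighted aggregate, so any $x$ feasible for $X_0$ remains feasible for $X_0(\mu)$. A sketch (names ours):

```python
def surrogate_constraint(A, b, mu):
    """Aggregate the constraints sum_j A[p][j]*x_j <= b[p], p = 1..m, into
    the single surrogate constraint
    sum_j (sum_p mu_p*A[p][j]) * x_j <= sum_p mu_p*b[p]."""
    n = len(A[0])
    coeffs = [sum(mu_p * row[j] for mu_p, row in zip(mu, A)) for j in range(n)]
    rhs = sum(mu_p * bp for mu_p, bp in zip(mu, b))
    return coeffs, rhs
```

With $A = \begin{pmatrix}1 & 2\\ 3 & 1\end{pmatrix}$, $b = (4, 5)$ and $\mu = (1, 1)$, the aggregate constraint is $4x_1 + 3x_2 \leq 9$; a point such as $x = (1, 1)$, feasible for both original constraints, satisfies it as well.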

Given $\lambda$ and $S_L$, based on Lemma 1, an element $x$ of upper shell $S_U$ is a source of an upper bound on $f_{\bar{l}}(x^{Popt}(\lambda))$, $\bar{l} \in \{1, \dots, k\}$, when $f(x)$ is appropriately located with respect to the vector of lower bounds $L(S_L, \lambda)$. In Miroforidis (2021), an idea of how to derive an upper shell containing an element useful for calculating upper bounds on $f_{\bar{l}}(x^{Popt}(\lambda))$ has been proposed: probe the objective space by perturbing the components of vector $\lambda$. Yet, no algorithmic approach for doing that is given in Miroforidis (2021). In the current work, we try to fill this gap.

Fig. 2

Deriving upper shell $S_U^1$, whose element $x^{\lambda'''}$ is a source of an upper bound, $U_1$, for $f_1(x^{Popt}(\lambda))$ with some $\lambda$: ∘ – images of upper shell $S_U^1$ elements in the objective space, ▲ – vector of lower bounds.


For $k = 2$, the idea of an algorithm for deriving an upper shell $S_U^1$, some element of which is a source of an upper bound on $f_1(x^{Popt}(\lambda))$, is shown in Fig. 2. Let us assume that vector $\mu$ is given. At the beginning, $S_U^1 := \emptyset$. Below, we describe the three iterations of this example.

Iteration 1. We set the first probing vector $\lambda' := (\lambda_1 + \delta, \lambda_2 - \delta)$, $\lambda_2' > 0$, for some $\delta > 0$ (as a probing vector, we exclude $\lambda$ itself because we expect that the corresponding solution will not be properly located with respect to the vector of lower bounds; see Kaliszewski and Miroforidis, 2021). Let $x^{\lambda'}$ be the solution to $ChebRLX(X_0(\mu), \lambda')$. $S_U^1 := S_U^1 \cup \{x^{\lambda'}\}$. Based on Lemma 1, $x^{\lambda'}$ is not a source of an upper bound on $f_1(x^{Popt}(\lambda))$ because $f(x^{\lambda'})$ is not appropriately located with respect to the vector of lower bounds $L = L(S_L, \lambda)$. Hence, we CONTINUE.

Iteration 2. We set $\lambda'' := (\lambda_1' + \delta, \lambda_2' - \delta)$, $\lambda_2'' > 0$. Let $x^{\lambda''}$ be the solution to $ChebRLX(X_0(\mu), \lambda'')$. $S_U^1 := S_U^1 \cup \{x^{\lambda''}\}$. Based on Lemma 1, $x^{\lambda''}$ is not a source of an upper bound on $f_1(x^{Popt}(\lambda))$. Hence, we CONTINUE.

Iteration 3. We set $\lambda''' := (\lambda_1'' + \delta, \lambda_2'' - \delta)$, $\lambda_2''' > 0$. Let $x^{\lambda'''}$ be the solution to $ChebRLX(X_0(\mu), \lambda''')$. $S_U^1 := S_U^1 \cup \{x^{\lambda'''}\}$. Based on Lemma 1, $x^{\lambda'''}$ is a source of an upper bound on $f_1(x^{Popt}(\lambda))$. So, $U_1 = U_1(S_U^1, \lambda) = f_1(x^{\lambda'''})$. Hence, we STOP.

As the elements $x^{\lambda'}$, $x^{\lambda''}$, and $x^{\lambda'''}$ are Pareto optimal solutions to the relaxation of the MOMIP problem with $X_0(\mu)$, $S_U^1$ is a valid upper shell.

To obtain an upper bound on $f_2(x^{Popt}(\lambda))$, we need to derive an upper shell $S_U^2$. To do this, in the first iteration, we set $\lambda' := (\lambda_1 - \delta, \lambda_2 + \delta)$, $\lambda_1' > 0$, and proceed in the same way.

Given $\bar{l} \in \{1, \dots, k\}$, the FindUpperShell algorithm tries to derive an upper shell $S_U^{\bar{l}}$ whose element $x$ is a source of an upper bound on $f_{\bar{l}}(x^{Popt}(\lambda))$, i.e. of $U_{\bar{l}}(S_U^{\bar{l}}, \lambda)$.

[Figure: pseudocode of the FindUpperShell algorithm (infor549_g003.jpg)]

In Line 2, we set the step size $\delta$ that is used to modify the components of consecutive probing vectors $\lambda$. Parameter $\gamma > 0$ controls the step size: the greater the value of $\gamma$, the denser the sampling of the objective space in the search for the desired element of the upper shell. In the main loop (Lines 4–16), we populate upper shell $S_U^{\bar{l}}$, checking whether its newest element fulfils the conditions of Lemma 1 to be a valid source of the upper bound on $f_{\bar{l}}(x^{Popt}(\lambda))$. The algorithm stops when $\lambda_{\bar{l}} \geq 1$, OR some component of $\lambda$ is negative, OR an element that fulfils the conditions of Lemma 1 is found. Lines 6 and 9 guarantee that $\sum_{l=1}^{k} \lambda_l = 1$. The exit condition of the "while" loop ensures that $\lambda_l > 0$, $l = 1, \dots, k$.

If no element of $S_U^{\bar{l}}$ satisfies the conditions of Lemma 1, then the algorithm simply returns the upper shell as well as the only available upper bound on $f_{\bar{l}}(x^{Popt}(\lambda))$, namely $y_{\bar{l}}$. Otherwise, an upper bound better than $y_{\bar{l}}$ is returned, along with the upper shell.
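The probing loop described above can be sketched in a few lines. This is our simplified stand-in, not the paper's pseudocode: we assume a step-size rule $\delta = 1/\gamma$ (the algorithm's Line 2 may define $\delta$ differently), split the weight decrease evenly over the remaining components to keep the weights summing to 1, and abstract the MIP solver behind a `solve_chebrlx` callback:

```python
def find_upper_shell(lbar, lam, L, solve_chebrlx, f, gamma):
    """Probe the objective space with perturbed weight vectors until an
    upper-shell element fulfilling Lemma 1 for component lbar is found.
    Returns (upper shell, upper bound or None if the search fell through)."""
    k = len(lam)
    delta = 1.0 / gamma                       # assumed step-size rule
    S_U, lam_p = [], list(lam)
    while True:
        lam_p[lbar] += delta                  # push weight towards objective lbar
        for l in range(k):
            if l != lbar:
                lam_p[l] -= delta / (k - 1)   # keep the weights summing to 1
        if lam_p[lbar] >= 1.0 or min(lam_p) < 0.0:
            return S_U, None                  # caller falls back to y_lbar
        x = solve_chebrlx(tuple(lam_p))       # solve ChebRLX(X0(mu), lambda')
        S_U.append(x)
        fx = f(x)
        # Lemma 1: f_l(x) <= L_l for all l != lbar, and f_lbar(x) >= L_lbar
        if fx[lbar] >= L[lbar] and all(fx[l] <= L[l]
                                       for l in range(k) if l != lbar):
            return S_U, fx[lbar]
```

As a toy usage, a fake "solver" whose outcome is proportional to the weights, with $\lambda = (0.5, 0.5)$, $L = (5, 5)$ and $\gamma = 10$, terminates in the first probing iteration with the bound $f_1 = 6$.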

4.2 Calculating Interval Representations

Based on the above elements, an interval representation of the Pareto optimal outcome given by vector λ can be calculated with the use of the Chute algorithm. Along with the interval representation, this algorithm also returns lower and upper shells that were determined during its operation. In Section 5.7, we explain why the algorithm also returns lower and upper shells.

Line 4 of the algorithm needs clarification. Vector $\mu$ can be set as shown in Kaliszewski and Miroforidis (2022), namely by taking $\mu_p := 1$, $p = 1, \dots, m$. In that work, all surrogate multipliers have the same value. We call this version of the Chute algorithm Chute1.

[Figure: pseudocode of the Chute algorithm (infor549_g004.jpg)]

Yet, in Kaliszewski and Miroforidis (2022), in the section "Final remarks", it has been suggested that "Tighter bounds might be obtained with other values of the multipliers. This possibility is worth exploring in future works." Unfortunately, no indication is given there of how to select a vector of surrogate multipliers other than $(1, \dots, 1) \in \mathbb{R}^m$. However, we can use the theory of duality for this purpose.

Given $\mu$ and $\lambda$, let $x'$ be the solution to $ChebRLX(X_0(\mu), \lambda)$, and let $s(\mu)$ be the objective function value of $x'$. Based on Lemmas 2 and 3, $\{x'\}$ is a valid upper shell. Let $s^*$ be the objective function value of the solution to problem (2) with $\lambda$ and $X_0$. It is a well-known fact (see, e.g. Glover, 1965, 1968) that $s^* \geq s(\mu)$. Hence, for given $\lambda$ and $\mu$, $s(\mu)$ is a lower bound on $s^*$.

Given $\lambda$, the best (highest) lower bound on $s^*$ is the objective function value of the solution $\mu^*$ to the following surrogate dual problem

(6)
$$\sup_{\mu \geq 0,\ \mu \neq 0}\ \Big\{ \min_{x \in X_0(\mu)}\ \max_{l}\ \lambda_l \big(y_l - f_l(x)\big) + \rho\, e^k \big(y - f(x)\big) \Big\},$$
which is connected to the Chebyshev scalarization (problem (2)). Solving (6) to optimality can be time-consuming. Yet, a suboptimal vector of multipliers $\tilde{\mu}$ can be determined instead of $\mu^*$. This can be done with the help of a quasi-subgradient-like algorithm (we shall call it Suboptimal) by Dyer (1980), with the following stopping condition:

"The number of iterations without improving the value of the objective function in problem (6) is greater than $N$, OR the time spent on optimization is greater than $TS$ seconds."

In the current work, we set time limits on computation, hence the above stopping condition is justified in practice.

We will use the vector $(1, \dots, 1)/\|(1, \dots, 1)\| \in \mathbb{R}^m$, where $\|\cdot\|$ is the Euclidean norm, as the initial vector of surrogate multipliers in the Suboptimal algorithm. Under the above assumptions, this algorithm has three parameters, namely $\lambda$, $N$, and $TS$; i.e. in pseudocode, it can be used as a function $Suboptimal(\lambda, N, TS)$, which returns a suboptimal vector of multipliers $\tilde{\mu}$.
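The stopping condition can be wrapped around any iterative dual-update routine. In the sketch below (ours), `step_fn` is a stand-in for one Dyer-type quasi-subgradient update of the multipliers together with the resulting dual objective value; the actual update rule is in Dyer (1980) and is not reproduced here:

```python
import time

def suboptimal(step_fn, mu0, N, TS):
    """Iterate step_fn until N consecutive non-improving iterations occur
    or TS seconds of wall-clock time elapse; return the best multipliers."""
    best_mu, best_val = mu0, float("-inf")
    mu, stall, t0 = mu0, 0, time.monotonic()
    while stall <= N and time.monotonic() - t0 <= TS:
        mu, val = step_fn(mu)                 # one dual update + its objective value
        if val > best_val:
            best_mu, best_val, stall = mu, val, 0
        else:
            stall += 1                        # no improvement this iteration
    return best_mu
```

With a toy `step_fn` whose dual value increases for three iterations and then plateaus, and $N = 2$, the loop returns the multipliers from the last improving iteration.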

We shall call Chute2 the version of the Chute algorithm that uses (in Line 4) the Suboptimal algorithm to set the vector of surrogate multipliers $\mu$ for a given $\lambda$. It has two additional input parameters, $N$ and $TS$.

Let us note that in the Chute2 algorithm, we set the vector of surrogate multipliers once for a given $\lambda$. The FindUpperShell algorithm uses perturbations of vector $\lambda$ to sample the objective space, and for all these perturbations the same vector $\mu$ is used. It is our heuristic assumption that, even when using the same vector $\mu$ for the various perturbed vectors close to $\lambda$, the Chute2 algorithm is able to find a better $R(S_L, S_U, \lambda)$, i.e. tighter upper bounds on the components of $f(x^{Popt}(\lambda))$, than the Chute1 algorithm. However, this comes at the cost of increasing the computation time relative to Chute1 by at most $TS$. We verify this experimentally in the next section.

Fig. 3

The idea of deriving upper shell $\{x(\tilde{\mu})\}$ whose element is a source of a better upper bound on $f_2(x^{Popt}(\lambda))$ than the element of upper shell $\{x(\mu^I)\}$.


The idea (for $k = 2$ and $\rho = 0$) of using suboptimal values of surrogate multipliers to get an upper shell that is a source of a better upper bound is illustrated in Fig. 3. Let $\mu^I := (1, \dots, 1) \in \mathbb{R}^m$, and let $\tilde{\mu}$ be the output of the Suboptimal algorithm (with some $N$ and $TS$) for some $\lambda$. $x(\mu^I)$ is the solution to optimization problem (2) with $X_0$ replaced by $X_0(\mu^I)$, and $s(\mu^I)$ is the objective function value of $x(\mu^I)$. Likewise, $x(\tilde{\mu})$ is the solution to optimization problem (2) with $X_0$ replaced by $X_0(\tilde{\mu})$, and $s(\tilde{\mu})$ is the objective function value of $x(\tilde{\mu})$. The vector of lower bounds $L$ is marked with a triangle. Both elements $f(x(\tilde{\mu}))$ and $f(x(\mu^I))$ are appropriately located with regard to $L$ (see Lemma 1) to be sources of an upper bound on $f_2(x^{Popt}(\lambda))$. As $s(\tilde{\mu}) > s(\mu^I)$ (contours of the Chebyshev metric for both values $s(\tilde{\mu})$ and $s(\mu^I)$ are shown by solid thin lines) as well as $f_1(x(\tilde{\mu})) < f_1(x(\mu^I))$ AND $f_2(x(\tilde{\mu})) < f_2(x(\mu^I))$, element $x(\tilde{\mu})$ is a source of a better upper bound on $f_2(x^{Popt}(\lambda))$ than element $x(\mu^I)$, as upper bounds are calculated from the components of the upper shell elements. In Fig. 3, $f(x(\tilde{\mu}))$ is closer to the (unknown) Pareto front (represented by the solid curve) than element $f(x(\mu^I))$. It could happen that the condition $f_1(x(\tilde{\mu})) < f_1(x(\mu^I))$ AND $f_2(x(\tilde{\mu})) < f_2(x(\mu^I))$ does not hold. In this case, if $f_2(x(\tilde{\mu})) \geq f_2(x(\mu^I))$, we obtain no better upper bound on $f_2(x^{Popt}(\lambda))$. On the other hand, if $f_1(x(\tilde{\mu})) \geq f_1(x(\mu^I))$ and still $f(x(\tilde{\mu}))$ is appropriately located with regard to $L$ (see Lemma 1) to be a source of an upper bound on $f_2(x^{Popt}(\lambda))$, we get a better upper bound on $f_2(x^{Popt}(\lambda))$.

5 Computational Experiments

In this section, we present the results of two experiments where we apply algorithms Chute1 and Chute2 presented in Section 4.2 to selected instances of the Multi-Objective Multidimensional 0–1 Knapsack Problem (MOMKP) with two and three objective functions. The instances are demanding for modern MIP solvers.

5.1 Multi-Objective Multidimensional 0–1 Knapsack Problem

For k>1, the MOMKP is formulated in the following way.

(7)
$$\operatorname{vmax}\ \big(f_1(x), \dots, f_k(x)\big), \quad f_l(x) := \sum_{j=1}^{n} c_{l,j} x_j,\ \ l = 1, \dots, k, \quad \text{s.t. } x \in X_0 := \Big\{x \ \Big|\ \sum_{j=1}^{n} a_{p,j} x_j \leq b_p,\ p = 1, \dots, m,\ x_j \in \{0, 1\},\ j = 1, \dots, n\Big\},$$
where all $a_{p,j}$ and $c_{l,j}$ are non-negative. In Kaliszewski and Miroforidis (2022), it has been explained why the MOMKP is NP-hard.
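Evaluating a candidate 0–1 vector for problem (7) is direct: compute its outcome vector and check the knapsack constraints. A sketch (names ours):

```python
def momkp_evaluate(x, C, A, b):
    """Return (outcome vector, feasibility flag) of a 0-1 solution x for (7):
    C[l][j] are the objective coefficients c_{l,j}, and A[p][j], b[p] define
    the knapsack constraints sum_j a_{p,j}*x_j <= b_p."""
    f = [sum(c * xj for c, xj in zip(row, x)) for row in C]
    feasible = all(sum(a * xj for a, xj in zip(row, x)) <= bp
                   for row, bp in zip(A, b))
    return f, feasible
```

For instance, with two objectives, one constraint, and $x = (1, 0, 1)$ below, the outcome is $(5, 2)$ and the solution is feasible since the used capacity is $4 \leq 5$.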

5.2 Test Instances of the MOMIP Problem

As tri-criteria instances of the MOMKP, we take two instances from Kaliszewski and Miroforidis (2022) that were generated based on the 1st problem of the 6th group ($n = 500$, $m = 10$) of multidimensional 0–1 knapsack problems, as well as on the 1st problem of the 9th group ($n = 500$, $m = 30$) (both single-objective problems are stored in the Beasley OR-Library, http://people.brunel.ac.uk/~mastjjb/jeb/info.html). We call these tri-criteria instances Three6.1 and Three9.1, respectively.

By removing the third objective function of problem Three6.1, we create a bi-criteria instance called Bi6.1. Analogously, by removing the third objective function of problem Three9.1, we create a bi-criteria instance called Bi9.1.

Bi6.1, Bi9.1, Three6.1, and Three9.1 are our test instances of the MOMIP problem.

5.3 Experimental Setting

Gurobi (version 10.0.0) for Microsoft Windows (x64) is our selected MIP solver. The optimizer is installed on an Intel Core i7-7700HQ-based laptop with 16 GB RAM.

To be consistent with limiting optimization time, we do not derive the element $\hat{y}$ to optimality, so it is not known. Instead, we separately maximize each objective function of problem (7) within a time limit equal to 400 seconds. For instances Three6.1 and Three9.1, we set $y_l$, $l = 1, 2, 3$, to the best upper bound (provided by the MIP solver) on the values of the respective objective function (the MIP solver determined none of these maxima within this time limit). Thus, for Three6.1, $y := (128872, 131116, 131738)$, and for Three9.1, $y := (119379.88, 119365, 118122)$. Obviously, $\hat{y}_l < y_l$. Hence, such a $y$ approximating $\hat{y}$ can be used in (2) as well as in calculating lower and upper bounds. To avoid redundant calculations, for instances Bi6.1 and Bi9.1, we take only the first two components of the respective $y$.

Table 1

Vectors λ and lower bounds for test problems Bi6.1 and Bi9.1.

No. | λ | L(S_L, λ) for Bi6.1 | L(S_L, λ) for Bi9.1
1 | (0.055, 0.945) | (114253.29, 130251.56) | (104466.45, 118482.17)
2 | (0.116, 0.884) | (116707.61, 129508.69) | (107215.43, 117756.82)
3 | (0.733, 0.267) | (125690.15, 122399.79) | (116288.83, 110899.20)
4 | (0.397, 0.603) | (122075.81, 126638.06) | (112806.06, 115033.24)
5 | (0.439, 0.561) | (122514.80, 126139.05) | (113385.01, 114671.51)

We set $TL := 1200$ seconds and $\rho := 0.001$. The absolute lower bound on the values of all objective functions of the MOMKP is 0 (see Section 5.1), so this value is used when calculating the lower bounds $L_l(S_L, \lambda)$, $l = 1, \dots, k$ (for details, see Kaliszewski and Miroforidis, 2022).

For instances Bi6.1 and Bi9.1, we generate one set of five vectors $\lambda$, uniformly sampled from the two-dimensional unit simplex (see Smith and Tromble, 2004), and obtain the corresponding vectors of lower bounds $L(S_L, \lambda)$ (Table 1). For instances Three6.1 and Three9.1, we generate separate sets of five vectors $\lambda$, uniformly sampled from the three-dimensional unit simplex, and obtain the corresponding vectors of lower bounds $L(S_L, \lambda)$ (Tables 2 and 3, respectively).
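Uniform sampling from the unit simplex can be implemented with the sorted-spacings construction referenced above (Smith and Tromble, 2004): draw $k-1$ uniform random numbers, sort them, and take the spacings between consecutive values (with endpoints 0 and 1). A sketch:

```python
import random

def sample_simplex(k, rng=None):
    """Draw lambda = (lambda_1, ..., lambda_k) uniformly from the unit
    simplex as spacings of k-1 sorted uniform random numbers in [0, 1]."""
    rng = rng or random.Random()
    cuts = sorted(rng.random() for _ in range(k - 1))
    pts = [0.0] + cuts + [1.0]
    return [pts[i + 1] - pts[i] for i in range(k)]
```

By construction, the components are non-negative and sum to 1 (up to floating-point rounding), which is exactly the property required of the weight vectors $\lambda$.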

Table 2

Vectors λ and lower bounds L(SL,λ) for test problem Three6.1.

No. | λ | L(S_L, λ)
1 | (0.187, 0.770, 0.043) | (118622.54, 128616.78, 87944.84)
2 | (0.521, 0.324, 0.155) | (124156.03, 123541.43, 115957.65)
3 | (0.067, 0.680, 0.253) | (90480.66, 127282.50, 121460.00)
4 | (0.359, 0.295, 0.346) | (120988.48, 121527.94, 123559.14)
5 | (0.136, 0.078, 0.786) | (115623.82, 108141.30, 129431.77)
Table 3

Vectors λ and lower bounds L(SL,λ) for test problem Three9.1.

No. | λ | L(S_L, λ)
1 | (0.351, 0.351, 0.298) | (111876.06, 111861.18, 109288.07)
2 | (0.243, 0.143, 0.614) | (110262.24, 103915.65, 114504.59)
3 | (0.278, 0.494, 0.228) | (110549.31, 114387.77, 107363.36)
4 | (0.179, 0.471, 0.350) | (105139.63, 113934.39, 110819.31)
5 | (0.407, 0.014, 0.579) | (112514.84, 0.00, 113292.80)

For none of the problem instances and none of the vectors $\lambda$ did the selected MIP solver derive the solution to problem (2) within the assumed $TL = 1200$ seconds. Thus, the use of the Chute algorithm to determine interval representations of the Pareto optimal outcomes designated by these vectors $\lambda$ is justified.

We conduct two numerical experiments. In experiment 1, we test the behaviour of algorithm Chute1; in experiment 2, the behaviour of algorithm Chute2. In both experiments, on the generated test instances, we test the behaviour of the Chute algorithm for $\gamma := 10, 30, 50$. Given $\lambda$, we check the impact of parameter $\gamma$ on the components of $GP_{sub}(R(S_L, S_U, \lambda))$.

In the tables with results of the experiments, the meaning of the columns is as follows.

  • $U(S_U, \lambda)$ – components of the vector of upper bounds;

  • $GAP_{Psub}\%$ – components of $GP_{sub}(R(S_L, S_U, \lambda))$;

  • $|S_U|$ – the number of elements in the upper shell;

  • Time $S_U$ (s) – the time to derive the upper shell (in seconds); for Chute2, the running time of the Suboptimal algorithm is given in parentheses.

In bold, we indicate the improvement, and by underscore the deterioration, of a single component of $GP_{sub}(R(S_L, S_U, \lambda))$ when increasing the value of parameter $\gamma$; we consider the changes from $\gamma = 10$ to $\gamma = 30$ and from $\gamma = 30$ to $\gamma = 50$.

5.4 Experiment 1 – Deriving Interval Representations with the Chute1 Algorithm

In this experiment, we check the behaviour of the Chute1 algorithm.

Table 4

Chute1, vectors of upper bounds for test problem Bi6.1 and γ ∈ {10, 30, 50}.

No. | U(S_U, λ), γ=10 | U(S_U, λ), γ=30 | U(S_U, λ), γ=50
1 | (120964.00, 131117.00) | (120466.00, 131117.00) | (120093.00, 131117.00)
2 | (121666.00, 131117.00) | (121666.00, 131117.00) | (121441.00, 131117.00)
3 | (127790.00, 126078.00) | (127635.00, 125842.00) | (127646.00, 125642.00)
4 | (125252.00, 128990.00) | (124924.00, 128990.00) | (124852.00, 128990.00)
5 | (125502.00, 128779.00) | (125335.00, 128665.00) | (125307.00, 128710.00)
Table 5

Chute1, GAP_Psub% for test problem Bi6.1 and γ ∈ {10, 30, 50}.

No. | GAP_Psub%, γ=10 | GAP_Psub%, γ=30 | GAP_Psub%, γ=50
1 | (5.55, 0.66) | (5.16, 0.66) | (4.86, 0.66)
2 | (4.08, 1.23) | (4.08, 1.23) | (3.90, 1.23)
3 | (1.64, 2.92) | (1.52, 2.74) | (1.53, 2.58)
4 | (2.54, 1.82) | (2.28, 1.82) | (2.22, 1.82)
5 | (2.38, 2.05) | (2.25, 1.96) | (2.23, 2.00)
Table 6

Chute1, values of |S_U| and Time S_U (s) for test problem Bi6.1 and γ ∈ {10, 30, 50}.

No. | |S_U| (γ=10 / 30 / 50) | Time S_U (s) (γ=10 / 30 / 50)
1 | 11 / 32 / 52 | 1.90 / 8.00 / 11.49
2 | 11 / 33 / 54 | 2.02 / 5.97 / 11.52
3 | 8 / 21 / 34 | 2.07 / 4.26 / 9.12
4 | 7 / 19 / 31 | 2.38 / 5.90 / 9.50
5 | 7 / 19 / 32 | 2.08 / 5.63 / 8.59
Table 7

Chute1, vectors of upper bounds for test problem Bi9.1 and γ ∈ {10, 30, 50}.

No. | U(S_U, λ), γ=10 | U(S_U, λ), γ=30 | U(S_U, λ), γ=50
1 | (116289.00, 119365.00) | (116289.00, 119365.00) | (116193.00, 119365.00)
2 | (117277.00, 119365.00) | (117050.00, 119365.00) | (117057.00, 119365.00)
3 | (119379.88, 118456.00) | (119379.88, 118456.00) | (119379.88, 118404.00)
4 | (119379.88, 119365.00) | (119295.00, 119365.00) | (119329.00, 119365.00)
5 | (119379.88, 119365.00) | (119379.88, 119365.00) | (119379.88, 119365.00)

The results for instance Bi6.1 are shown in Tables 4–6, and the results for instance Bi9.1 are shown in Tables 7–9. The results for instance Three6.1 are shown in Tables 10–12, and the results for instance Three9.1 are shown in Tables 13–15.

Table 8

Chute1, GAP_Psub% for test problem Bi9.1 and γ ∈ {10, 30, 50}.

No. | GAP_Psub%, γ=10 | GAP_Psub%, γ=30 | GAP_Psub%, γ=50
1 | (10.17, 0.74) | (10.17, 0.74) | (10.09, 0.74)
2 | (8.58, 1.35) | (8.40, 1.35) | (8.41, 1.35)
3 | (2.59, 6.38) | (2.59, 6.38) | (2.59, 6.34)
4 | (5.51, 3.63) | (5.44, 3.63) | (5.47, 3.63)
5 | (5.02, 3.93) | (5.02, 3.93) | (5.02, 3.93)
Table 9

Chute1, values of |S_U| and Time S_U (s) for test problem Bi9.1 and γ ∈ {10, 30, 50}.

No. | |S_U| (γ=10 / 30 / 50) | Time S_U (s) (γ=10 / 30 / 50)
1 | 12 / 36 / 59 | 3.06 / 11.22 / 20.44
2 | 14 / 40 / 67 | 3.91 / 10.70 / 17.21
3 | 17 / 51 / 84 | 3.82 / 13.05 / 21.78
4 | 20 / 59 / 99 | 5.9 / 13.79 / 24.32
5 | 20 / 60 / 100 | 6.0 / 14.84 / 21.51
Table 10

Chute1, vectors of upper bounds U(SU, λ) for test problem Three6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                               γ = 30                               γ = 50
1    (128872.00, 131116.00, 122914.00)    (128872.00, 131116.00, 122471.00)    (128872.00, 131116.00, 122387.00)
2    (127143.00, 127325.00, 124450.00)    (127069.00, 127325.00, 123193.00)    (127105.00, 127325.00, 122899.00)
3    (124359.00, 131116.00, 131738.00)    (124359.00, 131116.00, 131738.00)    (124217.00, 131116.00, 131738.00)
4    (125062.00, 125714.00, 126639.00)    (124809.00, 125327.00, 126342.00)    (124747.00, 125273.00, 126099.00)
5    (128872.00, 120016.00, 131738.00)    (128872.00, 120016.00, 131738.00)    (128872.00, 119610.00, 131738.00)
Table 11

Chute1, GAPPsub% for test problem Three6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                 γ = 30                 γ = 50
1    (7.95, 1.91, 28.45)    (7.95, 1.91, 28.19)    (7.95, 1.91, 28.14)
2    (2.35, 2.97, 6.82)     (2.29, 2.97, 5.87)     (2.32, 2.97, 5.65)
3    (27.24, 2.92, 7.80)    (27.24, 2.92, 7.80)    (27.16, 2.92, 7.80)
4    (3.26, 3.33, 2.43)     (3.06, 3.03, 2.20)     (3.01, 2.99, 2.01)
5    (10.28, 9.89, 1.75)    (10.28, 9.89, 1.75)    (10.28, 9.59, 1.75)
Table 12

Chute1, values of |SU| and TimeSU(s) for test problem Three6.1 and γ ∈ {10, 30, 50}.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    8, 14, 33                2.69, 4.46, 10.48
2    11, 21, 50               3.39, 7.80, 23.98
3    11, 21, 49               5.37, 11.84, 22.38
4    7, 12, 28                5.46, 9.28, 24.29
5    11, 21, 51               3.27, 6.54, 16.37
Table 13

Chute1, vectors of upper bounds U(SU, λ) for test problem Three9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                               γ = 30                                 γ = 50
1    (119379.88, 119365.00, 117516.00)    (119379.8835, 119365.00, 117398.00)    (119379.8835, 119365.00, 117362.00)
2    (119379.88, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)
3    (119379.88, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)
4    (119379.88, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)
5    (119379.88, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)    (119379.8835, 119365.00, 118122.00)
Table 14

Chute1, GAPPsub% for test problem Three9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                  γ = 30                  γ = 50
1    (6.29, 6.29, 7.00)      (6.29, 6.29, 6.91)      (6.29, 6.29, 6.88)
2    (7.64, 12.94, 3.06)     (7.64, 12.94, 3.06)     (7.64, 12.94, 3.06)
3    (7.40, 4.17, 9.11)      (7.40, 4.17, 9.11)      (7.40, 4.17, 9.11)
4    (11.93, 4.55, 6.18)     (11.93, 4.55, 6.18)     (11.93, 4.55, 6.18)
5    (5.75, 100.00, 4.09)    (5.75, 100.00, 4.09)    (5.75, 100.00, 4.09)
Table 15

Chute1, values of |SU| and TimeSU(s) for test problem Three9.1 and γ ∈ {10, 30, 50}.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    28, 79, 130              50.09, 163.04, 237.71
2    18, 53, 86               46.24, 324.61, 413.15
3    25, 69, 115              79.80, 222.40, 387.76
4    22, 64, 105              36.09, 104.40, 175.58
5    11, 29, 49               11.38, 50.33, 101.04

For instance Bi6.1, when changing γ=10 to γ=30, we observe an improvement in at least one component of GPsub(R(SL,SU,λ)) for four vectors λ, although in only two cases (λ Nos. 3 and 5) do both components improve. Yet, when changing γ=30 to γ=50, we observe an improvement in at least one component of GPsub(R(SL,SU,λ)) for three vectors λ. For λ No. 3, the first component of GPsub(R(SL,SU,λ)) deteriorates while, at the same time, the second improves. For λ No. 5, the first component of GPsub(R(SL,SU,λ)) improves while, at the same time, the second deteriorates.

When changing γ=10 to γ=50, we observe an improvement in at least one component of GPsub(R(SL,SU,λ)) for all vectors λ.

For instance Bi9.1, we observe a similar phenomenon (improvements as well as deteriorations), although when changing γ=10 to γ=30, we observe an improvement in only one component of GPsub(R(SL,SU,λ)), and for just two vectors λ. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for vectors λ Nos. 1–4 (at least one component improves). For λ No. 5, GPsub(R(SL,SU,λ)) remains unchanged.

For instances Bi6.1 and Bi9.1, the higher the value of parameter γ (higher sampling density of the objective space), the more numerous the derived upper shells are. For each vector λ, the time to derive the corresponding upper shell is a small fraction of the assumed time limit TL=1200 seconds.
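The role of γ as a sampling-density parameter can be illustrated with a standard simplex-lattice construction. The paper does not spell out its exact probing rule, so the following Python sketch is only an assumed illustration of how the number of evenly distributed weighting vectors grows with γ for k = 2 and k = 3:

```python
from math import comb

def compositions(k, total):
    """All k-tuples of nonnegative integers summing to `total`."""
    if k == 1:
        yield (total,)
        return
    for i in range(total + 1):
        for tail in compositions(k - 1, total - i):
            yield (i,) + tail

def lattice_weights(k, gamma):
    """Evenly spaced weighting vectors on the unit simplex with step 1/gamma."""
    return [tuple(i / gamma for i in c) for c in compositions(k, gamma)]

# The number of vectors is C(gamma + k - 1, k - 1), i.e. linear in gamma
# for two objectives and quadratic for three.
assert len(lattice_weights(2, 10)) == comb(11, 1)   # 11 vectors
assert len(lattice_weights(3, 10)) == comb(12, 2)   # 66 vectors
assert all(abs(sum(w) - 1.0) < 1e-9 for w in lattice_weights(3, 10))
```

This makes the observed growth of upper-shell cardinality and of TimeSU with γ plausible: a denser lattice means more probing directions, and the effect is stronger for the tri-criteria instances.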

Let us check the results for the tri-criteria instances. For instance Three6.1, when changing γ=10 to γ=30, we observe an improvement of GPsub(R(SL,SU,λ)) in at least one of its components for three vectors λ, although only for one (λ No. 4) do all components improve. When changing γ=30 to γ=50, we observe an improvement of GPsub(R(SL,SU,λ)) in at least one of its components for three vectors λ. We also observe a deterioration of the first component of GPsub(R(SL,SU,λ)) for λ No. 2 and, at the same time, an improvement of its third component. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for all vectors λ (at least one component improves).

For instance Three9.1 and vectors λ Nos. 2–5, the upper bounds on all components of f(xPopt(λ)) are equal to the corresponding components of y∗. Only for λ No. 1 is f3(xPopt(λ)) < y∗3, and when changing γ=10 to γ=30, as well as γ=30 to γ=50, only the third component of GPsub(R(SL,SU,λ)) improves.

For instances Three6.1 and Three9.1, the higher the value of parameter γ (the higher the sampling density of the objective space), the more numerous the derived upper shells are. For instance Three6.1, and for instance Three9.1 with γ=10, for most vectors λ the time to derive the corresponding upper shell is a small fraction of the assumed time limit TL=1200 seconds. However, for instance Three9.1 with γ=30 and γ=50, this time increases significantly compared to γ=10 for all vectors λ.

We verified that for all instances, for no vector λ did the MIP solver derive the optimal solution to problem (2) within the time limit TL + TimeSU.
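Problem (2) is the Chebyshev scalarization that the MIP solver attempts to solve; its exact augmented form is defined earlier in the paper, so the sketch below evaluates only one common variant (weighted Chebyshev distance from the reference point y∗ with a small augmentation term ρ) over a finite set of outcomes. The names and the constant ρ = 0.001 are assumptions for illustration only:

```python
def chebyshev_value(y, ystar, lam, rho=1e-3):
    """Augmented Chebyshev achievement of outcome y w.r.t. reference point ystar
    (objectives are maximized, so deviations ystar_l - y_l are penalized)."""
    dev = [ys - yl for ys, yl in zip(ystar, y)]
    return max(l * d for l, d in zip(lam, dev)) + rho * sum(dev)

# Among a finite set of outcomes, minimizing the scalarization selects the one
# closest to y* in the weighted Chebyshev sense.
ystar = (10.0, 10.0)
outcomes = [(9.0, 5.0), (7.0, 8.0), (6.0, 9.0)]
best = min(outcomes, key=lambda y: chebyshev_value(y, ystar, (0.5, 0.5)))
assert best == (7.0, 8.0)
```

Over the full feasible set of a large-scale MOMIP instance this minimization is itself a MIP, which is why the solver may exhaust the time limit before proving optimality.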

5.5 Experiment 2 – Deriving Interval Representations with the Chute2 Algorithm

In this experiment, we check the behaviour of the Chute2 algorithm with N=20 and TS=400. Recall that these parameters are related to the Suboptimal algorithm.

The results for instance Bi6.1 are shown in Tables 16–18, and the results for instance Bi9.1 in Tables 19–21. The results for instance Three6.1 are shown in Tables 22–24, and the results for instance Three9.1 in Tables 25–27.

Table 16

Chute2, vectors of upper bounds U(SU, λ) for test problem Bi6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                    γ = 30                    γ = 50
1    (118423.00, 130703.00)    (116671.00, 130703.00)    (116230.00, 130703.00)
2    (119507.00, 130021.00)    (118226.00, 129980.00)    (117969.00, 129946.00)
3    (126391.00, 123927.00)    (126213.00, 123235.00)    (126179.00, 123296.00)
4    (123079.00, 127228.00)    (122598.00, 127102.00)    (122657.00, 127146.00)
5    (123474.00, 126864.00)    (123273.00, 126864.00)    (123233.00, 126766.00)
Table 17

Chute2, GAPPsub% for test problem Bi6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10          γ = 30          γ = 50
1    (3.52, 0.35)    (2.07, 0.35)    (1.70, 0.35)
2    (2.34, 0.39)    (1.28, 0.36)    (1.07, 0.34)
3    (0.55, 1.23)    (0.41, 0.68)    (0.39, 0.73)
4    (0.82, 0.46)    (0.43, 0.37)    (0.47, 0.40)
5    (0.78, 0.57)    (0.62, 0.57)    (0.58, 0.49)
Table 18

Chute2, values of |SU| and TimeSU(s) for test problem Bi6.1 and γ ∈ {10, 30, 50}; the values in parentheses are the running times of the Suboptimal algorithm.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    6, 16, 26                81.16 (78.64), 84.79 (77.97), 91.44 (79.81)
2    4, 9, 13                 182.12 (180.58), 215.77 (211.45), 197.74 (192.33)
3    3, 5, 8                  37.59 (36.74), 41.65 (38.50), 42.82 (37.99)
4    2, 3, 6                  42.39 (41.04), 41.65 (40.42), 44.57 (41.78)
5    2, 5, 7                  44.07 (43.33), 41.28 (39.66), 39.82 (37.83)
Table 19

Chute2, vectors of upper bounds U(SU, λ) for test problem Bi9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                    γ = 30                    γ = 50
1    (110646.00, 119365.00)    (109109.00, 119365.00)    (109313.00, 119365.00)
2    (111592.00, 119218.00)    (111080.00, 119124.00)    (110985.00, 119105.00)
3    (118331.00, 114263.00)    (118172.00, 113760.00)    (118138.00, 113655.00)
4    (115241.00, 116795.00)    (115230.00, 116781.00)    (115121.00, 116732.00)
5    (115443.00, 116518.00)    (115426.00, 116271.00)    (115321.00, 116233.00)
Table 20

Chute2, GAPPsub% for test problem Bi9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10          γ = 30          γ = 50
1    (5.58, 0.74)    (4.25, 0.74)    (4.43, 0.74)
2    (3.92, 1.23)    (3.48, 1.15)    (3.40, 1.13)
3    (1.73, 2.94)    (1.59, 2.51)    (1.57, 2.42)
4    (2.11, 1.51)    (2.10, 1.50)    (2.01, 1.46)
5    (1.78, 1.58)    (1.77, 1.38)    (1.68, 1.34)
Table 21

Chute2, values of |SU| and TimeSU(s) for test problem Bi9.1 and γ ∈ {10, 30, 50}; the values in parentheses are the running times of the Suboptimal algorithm.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    11, 31, 52               439.96 (411.14), 515.48 (402.94), 665.55 (404.57)
2    10, 27, 44               482.23 (407.04), 678.38 (402.56), 765.19 (409.44)
3    8, 20, 32                443.50 (406.40), 534.97 (410.38), 667.23 (404.69)
4    5, 15, 23                424.95 (403.06), 465.13 (400.30), 484.92 (402.56)
5    5, 13, 20                432.90 (400.51), 485.67 (404.07), 512.03 (403.40)

For instance Bi6.1, when changing γ=10 to γ=30, we observe an improvement in at least one component of GPsub(R(SL,SU,λ)) for all vectors λ, although in only three cases (λ Nos. 3–5) do both components improve. Yet, when changing γ=30 to γ=50, we observe an improvement of GPsub(R(SL,SU,λ)) in at least one of its components for three vectors λ (Nos. 1, 2, and 5). For λ No. 3, we observe a deterioration of the second component of GPsub(R(SL,SU,λ)) and, at the same time, an improvement of the first. Both components of GPsub(R(SL,SU,λ)) deteriorate for λ No. 4. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for all vectors λ (at least one component improves).

Table 22

Chute2, vectors of upper bounds U(SU, λ) for test problem Three6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                               γ = 30                               γ = 50
1    (128872.00, 131116.00, 122259.00)    (128872.00, 131116.00, 120988.00)    (128872.00, 131116.00, 120434.00)
2    (125477.00, 125196.00, 121464.00)    (125483.00, 125196.00, 120230.00)    (125477.00, 125196.00, 119034.00)
3    (122652.00, 131116.00, 131738.00)    (122094.00, 131116.00, 131738.00)    (122334.00, 131116.00, 131738.00)
4    (122671.00, 123772.00, 125154.00)    (122242.00, 123201.00, 124765.00)    (122404.00, 123097.00, 124677.00)
5    (128872.00, 119155.00, 131738.00)    (128872.00, 118229.00, 131738.00)    (128872.00, 117837.00, 131738.00)
Table 23

Chute2, GAPPsub% for test problem Three6.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                 γ = 30                 γ = 50
1    (7.95, 1.91, 28.07)    (7.95, 1.91, 27.31)    (7.95, 1.91, 26.98)
2    (1.05, 1.32, 4.53)     (1.06, 1.32, 3.55)     (1.05, 1.32, 2.58)
3    (26.23, 2.92, 7.80)    (25.89, 2.92, 7.80)    (26.04, 2.92, 7.80)
4    (1.37, 1.81, 1.27)     (1.03, 1.36, 0.97)     (1.16, 1.27, 0.90)
5    (10.28, 9.24, 1.75)    (10.28, 8.53, 1.75)    (10.28, 8.23, 1.75)
Table 24

Chute2, values of |SU| and TimeSU(s) for test problem Three6.1 and γ ∈ {10, 30, 50}; the values in parentheses are the running times of the Suboptimal algorithm.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    8, 20, 31                106.30 (104.24), 133.88 (121.32), 118.75 (102.00)
2    4, 11, 17                413.24 (408.25), 436.70 (409.23), 429.80 (401.94)
3    10, 27, 44               50.87 (47.43), 76.35 (61.08), 60.59 (41.18)
4    3, 6, 10                 293.45 (289.56), 380.25 (365.56), 353.60 (319.88)
5    11, 30, 50               56.90 (51.22), 66.93 (51.58), 79.55 (50.46)
Table 25

Chute2, vectors of upper bounds U(SU, λ) for test problem Three9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                               γ = 30                               γ = 50
1    (115765.00, 115804.00, 113089.00)    (115250.00, 115546.00, 112742.00)    (115126.00, 115335.00, 112656.00)
2    (114092.00, 113121.00, 118122.00)    (113842.00, 112477.00, 117089.00)    (113861.00, 112778.00, 118122.00)
3    (116240.00, 117966.00, 112426.00)    (116229.00, 117577.00, 111977.00)    (116160.00, 117657.00, 111578.00)
4    (112893.00, 116633.00, 114061.00)    (112291.00, 116382.00, 113713.00)    (112177.00, 116438.00, 113665.00)
5    (119379.88, 112286.00, 118122.00)    (119379.88, 111440.00, 118122.00)    (119379.88, 111242.00, 118122.00)
Table 26

Chute2, GAPPsub% for test problem Three9.1 and γ ∈ {10, 30, 50}.

No.  γ = 10                  γ = 30                  γ = 50
1    (3.36, 3.40, 3.36)      (2.93, 3.19, 3.06)      (2.82, 3.01, 2.99)
2    (3.36, 8.14, 3.06)      (3.14, 7.61, 2.21)      (3.16, 7.86, 3.06)
3    (4.90, 3.03, 4.50)      (4.89, 2.71, 4.12)      (4.83, 2.78, 3.78)
4    (6.87, 2.31, 2.84)      (6.37, 2.10, 2.54)      (6.27, 2.15, 2.50)
5    (5.75, 100.00, 4.09)    (5.75, 100.00, 4.09)    (5.75, 100.00, 4.09)
Table 27

Chute2, values of |SU| and TimeSU(s) for test problem Three9.1 and γ ∈ {10, 30, 50}; the values in parentheses are the running times of the Suboptimal algorithm.

No.  |SU| (γ = 10, 30, 50)    TimeSU(s) (γ = 10, 30, 50)
1    8, 20, 31                519.63 (420.77), 548.70 (400.11), 654.78 (411.71)
2    12, 32, 55               412.71 (403.34), 478.80 (409.01), 521.99 (401.65)
3    12, 32, 52               444.12 (400.92), 527.00 (404.25), 605.74 (405.75)
4    9, 22, 36                425.02 (400.11), 462.81 (405.86), 515.93 (405.72)
5    5, 12, 20                325.93 (323.31), 335.62 (324.23), 360.17 (330.95)

For instance Bi9.1, when changing γ=10 to γ=30, we observe an improvement in at least one component of GPsub(R(SL,SU,λ)) for all vectors λ, and for λ Nos. 2–5, both components of GPsub(R(SL,SU,λ)) improve. When changing γ=30 to γ=50, we observe an improvement in both components of GPsub(R(SL,SU,λ)) for all vectors λ but the first one, where the first component of GPsub(R(SL,SU,λ)) deteriorates. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for all vectors λ (at least one component improves).

For instances Bi6.1 and Bi9.1, the higher the value of parameter γ (the higher the sampling density of the objective space), the more numerous the derived upper shells are. For instance Bi6.1, the average times over all vectors λ to derive the upper shell for γ=10, γ=30, and γ=50 are, respectively, 77.47, 85.03, and 83.28 seconds. These times are small fractions of the assumed time limit TL=1200 seconds. For instance Bi9.1, the average times over all vectors λ to derive the upper shell for γ=10, γ=30, and γ=50 are, respectively, 444.71, 535.92, and 618.98 seconds, and they are not small fractions of TL=1200 seconds.

Let us check the results for the tri-criteria instances. For instance Three6.1, when changing γ=10 to γ=30, we observe an improvement of GPsub(R(SL,SU,λ)) in at least one of its components for vectors λ Nos. 1 and 3–5. For λ No. 2, we observe an improvement in the third component of GPsub(R(SL,SU,λ)), as well as a deterioration of the first. When changing γ=30 to γ=50, we observe an improvement of GPsub(R(SL,SU,λ)) in at least one of its components for vectors λ Nos. 1–3 and 5. For λ No. 4, we observe an improvement in the second and third components of GPsub(R(SL,SU,λ)), as well as a deterioration of the first. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for all vectors λ (at least one component improves).

For instance Three9.1, when changing γ=10 to γ=30, we observe an improvement in all components of GPsub(R(SL,SU,λ)) for vectors λ Nos. 1–4; for λ No. 5, GPsub(R(SL,SU,λ)) remains unchanged. When changing γ=30 to γ=50, we observe an improvement in all components of GPsub(R(SL,SU,λ)) for vector λ No. 1. For λ No. 2, all components of GPsub(R(SL,SU,λ)) deteriorate, and for λ Nos. 3 and 4, two components of GPsub(R(SL,SU,λ)) improve while one deteriorates. When changing γ=10 to γ=50, we observe an improvement in GPsub(R(SL,SU,λ)) for all vectors λ but the last one (at least one component improves).

For instances Three6.1 and Three9.1, the higher the value of parameter γ (higher sampling density of the objective space), the more numerous the derived upper shells are. For instance Three9.1, average times over all vectors λ to derive the upper shell for γ=10, γ=30, and γ=50 are, respectively, 425.48, 470.58, and 531.72 seconds, and they are not small fractions of TL=1200 seconds.

We verified that for all instances, for no vector λ did the MIP solver derive the optimal solution to problem (2) within the time limit TL + TimeSU.

5.6 Comparing Chute2 with Chute1

When comparing Chute2 to Chute1, for all tested instances and all values of the γ parameter, we observe no deterioration of any component of GPsub(R(SL,SU,λ)). We observe the following.

For instance Bi6.1, for all values of γ, all components of GPsub(R(SL,SU,λ)) improve for all vectors λ.

For instance Bi9.1, for γ=10, all components of GPsub(R(SL,SU,λ)) improve for four vectors λ, and one component improves for one vector λ. The same situation occurs for γ=30 and γ=50.

For instance Three6.1, for γ=10, at least one component of GPsub(R(SL,SU,λ)) improves for all vectors λ, and for two of them all components improve. The same situation occurs for γ=30 and γ=50.

For instance Three9.1, for all γ, at least one component of GPsub(R(SL,SU,λ)) improves for four vectors λ. All components of GPsub(R(SL,SU,λ)) improve for γ=10, γ=30, and γ=50, respectively, for three, four, and three vectors λ. For all values of the γ parameter, for one vector λ there is no improvement of any component of GPsub(R(SL,SU,λ)).

Table 28 shows the times of deriving upper shells averaged over all vectors λ for both tested algorithms. We observe a significant increase in these times for Chute2 compared to Chute1. Recall that Chute2 uses the Suboptimal algorithm, whose stopping condition depends on the time limit TS=400 seconds assumed for it. For Chute2, the average running time of the Suboptimal algorithm is given in parentheses; it can be seen that a significant fraction of the Chute2 running time is spent in the Suboptimal algorithm. In addition, for all γ values, the average running times of Chute2 are larger for Bi9.1 than for Three9.1, although the latter is theoretically a harder problem to solve because it has one more objective function. The explanation is that for Bi9.1 the Suboptimal algorithm terminated due to the TS limit for all five vectors λ, while for Three9.1 it did so for only four vectors λ, which affected the average times. For details, see Tables 21 and 27.

Table 28

Average times of deriving upper shells for Chute1 and Chute2; for Chute2, the average running times of the Suboptimal algorithm are given in parentheses.

γ     AVG TimeSU(s), Chute1    AVG TimeSU(s), Chute2
Bi6.1
10    2.09                     77.47 (76.07)
30    5.95                     85.03 (81.60)
50    10.04                    83.28 (77.95)
Bi9.1
10    4.02                     444.71 (405.63)
30    12.72                    535.92 (404.05)
50    21.05                    618.98 (404.93)
Three6.1
10    4.04                     184.15 (180.14)
30    7.98                     218.82 (201.75)
50    19.50                    208.46 (183.09)
Three9.1
10    44.72                    425.48 (389.69)
30    172.96                   470.58 (388.69)
50    263.05                   531.72 (391.16)

5.7 Discussion

For all test instances of the MOMIP problem, with the time limits set, algorithm Chute2 determines, in most cases, tighter upper bounds (measured with the help of GPsub(R(SL,SU,λ))) than algorithm Chute1. Yet, this comes at the expense of a significant increase in the computation time for deriving upper shells. Thus, we observe a trade-off between the quality of the interval representation of the implicit Pareto optimal outcome for a given λ and the computation time. In both algorithms, for a given λ, the tightness of the upper bounds can be controlled by changing the value of parameter γ. However, increasing γ does not always guarantee an improvement in at least one component of GPsub(R(SL,SU,λ)); it may even happen that some of its components deteriorate. Nevertheless, in all tested instances, when changing from the lowest to the highest value of parameter γ, no deterioration of any component of GPsub(R(SL,SU,λ)) was recorded for any vector λ.

The deterioration of some components of GPsub(R(SL,SU,λ)) after increasing γ may be due to the fact that increasing the value of γ does not preserve the elements of the set SU obtained for the smaller γ, but generates a new, denser set SU that is, in general, different. The new elements of SU do not always yield better vectors of upper bounds; in some cases they yield even slightly worse ones than those obtained for the smaller γ. During decision-making, one can store all the derived upper shells and use their elements in the Chute algorithm as helpers to determine tighter upper bounds for a given λ when the DM asks for them.

Parameters affecting the operation of algorithms Chute1 and Chute2 (in particular, the time limits for optimization, as well as parameter γ) were set arbitrarily for the numerical experiments conducted on the selected test instances. We cannot recommend the adopted parameter values (e.g. TL=1200 seconds, TS=400 seconds) for other instances of the MOMIP problem. The values of these parameters may depend on the problem to be solved, the available computational resources, and the conditions of the decision-making process itself.

As Chute1 and Chute2 use a MIP solver as a black box, it is difficult to characterize their theoretical performance, especially since they can work with any instance of the MOMIP problem that meets the very generic assumptions made in this work. During their operation, multiple instances of the single-objective MIP problem are solved, parameterized by λ in the case of Chute1 and by (λ, μ) in the case of Chute2. Moreover, Chute2 uses the Suboptimal algorithm as a black box, and it is difficult to predict which of its termination conditions will be triggered as it runs on different instances of the single-objective MIP problem parameterized by λ.

The Chute algorithm returns not only the interval representation but also lower and upper shells. Let us assume that for a given set Λ := {λ1, λ2, λ3} the algorithm derives upper shells SU(λ1), SU(λ2), and SU(λ3). Then SU := SU(λ1) ⊕̂ SU(λ2) ⊕̂ SU(λ3) (where ⊕̂ is an operator of adding two sets and removing dominating elements) is an upper shell, and SL := {INCλ1} ⊕ {INCλ2} ⊕ {INCλ3} (where ⊕ is an operator of adding two sets and removing dominated elements) is a lower shell. One can use SL and SU to calculate interval representations of implicit Pareto optimal outcomes designated by λ ∈ Λ. For test problem Bi6.1 and γ := 50, images of the lower and upper shells obtained this way (for the five considered vectors λ) are shown in Fig. 4. These images form a finite two-sided approximation of the Pareto front. The approximation does not fully cover the entire Pareto front, as it was derived to determine interval representations of implicit Pareto optimal outcomes designated by just the five selected vectors λ.
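A minimal sketch of the two merge operators described above, assuming maximized objectives and outcomes represented as tuples: ⊕ keeps the nondominated points of the union (lower-shell merge), while ⊕̂ drops points that dominate another point of the union, so that the merged upper shell stays as tight as possible.

```python
def dominates(a, b):
    """a dominates b (maximization): a is componentwise >= b and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def merge_lower(*shells):
    """Operator ⊕: union of shells with dominated elements removed."""
    pts = {p for s in shells for p in s}
    return {p for p in pts if not any(dominates(q, p) for q in pts)}

def merge_upper(*shells):
    """Operator ⊕̂: union of shells with dominating elements removed."""
    pts = {p for s in shells for p in s}
    return {p for p in pts if not any(dominates(p, q) for q in pts)}

# (2, 2) is dominated by (3, 3), so it is dropped from the merged lower shell.
assert merge_lower({(1, 5), (3, 3)}, {(2, 2), (5, 1)}) == {(1, 5), (3, 3), (5, 1)}
# (6, 3) dominates (5, 2), so it is dropped from the merged upper shell.
assert merge_upper({(6, 3), (3, 6)}, {(5, 2)}) == {(3, 6), (5, 2)}
```

The quadratic pairwise check is sufficient for shells of the sizes reported in the tables; for much larger shells, a sort-based nondominance filter would be preferable.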

Fig. 4

A finite two-sided approximation of the Pareto front: □, ∘ – images of lower shell SL and upper shell SU elements in the objective space, respectively, ∙ – y∗.

We can say that we obtained the two-sided approximation of the Pareto front, shaped by the DM’s preferences expressed with the help of vectors λ.

Although we do not aim to derive approximations of the entire Pareto front (as in multi-objective branch and bound; see, e.g. Przybylski and Gandibleux, 2017; Forget et al., 2022), for k = 2, 3 and a fairly large set of evenly distributed vectors λ, one could envision the corridor in which the Pareto front is located.

6 Limitations of the Chute Algorithm and Its Possible Enhancements

In this section, we discuss the limitations and possible enhancements of the Chute algorithm to better adapt it to the realities of decision-making and the budgeting of calculations.

In the Chute algorithm, we have assumed that the same vector of multipliers μ is used for all probing vectors λ in the FindUpperShell algorithm. The Chute1 version inherently uses a single vector μ. For the Chute2 version, however, it is just a heuristic assumption that the vector μ, set with the help of the Suboptimal algorithm for a given vector λ in line 4 of the Chute algorithm, provides a tight lower bound on the values of the objective function of problem (2) for probing vectors close to λ. This need not be the case, especially for probing vectors that differ significantly from λ (i.e. when they indicate a significantly different search direction in the objective space).

However, one can imagine a version Chute3 of the Chute algorithm in which vector μ is determined within the FindUpperShell algorithm for each probing vector considered in it (or, e.g. only for probing vectors not close, in some sense, to the given λ). At the same time, this would require adopting a reasonable time limit on optimization in the Suboptimal algorithm, as many probing vectors are expected in the FindUpperShell algorithm. This time limit could be, e.g. a fraction of the time TS adopted in Chute2. Since the number of probing vectors is not known a priori, this time limit would have to be determined by some heuristic rule. Excessive time spent determining all vectors μ should not become a barrier to the applicability of the proposed method.

In a real decision-making process using the Chute algorithm, it is possible to calculate a better-adjusted value of parameter γ for a new vector λ based on the properties of the lower and upper shells obtained for the previous vectors λ with which this algorithm was called. Consider, for example, Table 18. For λ1 = (0.055, 0.945), |SU| = 26 and TimeSU = 91.44 seconds, but for λ5 = (0.439, 0.561), |SU| = 7 and TimeSU = 39.82 seconds. To make the time of deriving an upper shell for some λ close to λ1 comparable to the time for λ5, one could lower the value of parameter γ from 50 to, e.g. 20. Such an overarching mechanism (with a set of rules based on statistics collected during the decision-making process) for controlling the behaviour of the Chute algorithm could be useful when computation time is an important factor.
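One hypothetical rule of this kind (not taken from the paper, all names illustrative): assume TimeSU grows roughly linearly with γ for vectors λ in the same neighbourhood, fit the slope from the history collected so far, and pick the largest γ whose predicted time stays within a target budget.

```python
def suggest_gamma(history, target_time, gamma_min=5, gamma_max=100):
    """history: list of (gamma, observed TimeSU in seconds) pairs for similar
    lambda vectors. Fits TimeSU ~ a * gamma (least squares through the origin)
    and inverts the fit to meet target_time, clipped to a sensible range."""
    a = sum(t * g for g, t in history) / sum(g * g for g, _ in history)
    gamma = int(target_time / a)
    return max(gamma_min, min(gamma_max, gamma))

# With runs at gamma=10 (40 s) and gamma=30 (120 s), the fitted slope is
# 4 s per unit of gamma, so a 60-second budget suggests gamma = 15.
assert suggest_gamma([(10, 40.0), (30, 120.0)], 60.0) == 15
```

A production rule would also condition on the distance between the new λ and the previous ones, as the table shows that TimeSU varies considerably across λ for a fixed γ.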

In the proposed method of deriving upper shells (algorithm FindUpperShell), there is no single parameter that limits the optimization time of the derivation. Yet, such a time limit can be incorporated relatively easily as an additional stopping condition in FindUpperShell. More generally, it could even be desirable to introduce into the Chute algorithm a time limit for determining the interval representation of the implicit Pareto optimal outcome for a given vector λ. The DM would then provide, in addition to λ and TL, a time limit TI (e.g. TL=1200 seconds, TI=500 seconds). The Chute algorithm would then have the time limit TI/k to derive an upper shell for calculating a single component of the interval representation of the implicit Pareto optimal outcome designated by λ. The determination of suboptimal vectors μ in Chute2 and Chute3 would, of course, have to fit within some fraction of TI.

Based on the above alone, one can imagine many schemes for budgeting calculations that lead to interval representations in a decision-making system based on the Chute algorithm.

In our approach, to find elements of the upper shell, we solve to optimality the Chebyshev scalarization of the surrogate relaxation of the MOMIP problem. For instances of the MOMIP problem with a large number of constraints (e.g. 1000), even with a suboptimal vector of multipliers μ provided by the Suboptimal algorithm (that is, with a single constraint that mimics the original set of constraints of the MOMIP problem), the FindUpperShell procedure may not derive elements x of the upper shell such that fl(x) < y∗l, l = 1, …, k. In this case, the upper bounds on the components of the Pareto optimal outcome designated by λ are no better than the components of y∗; that is, the images of the upper-shell elements in the objective space are very far from the Pareto front of the MOMIP problem.
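As a reminder of what the single mimicking constraint looks like: for constraints Ax ≤ b and nonnegative multipliers μ, the surrogate constraint is (μᵀA)x ≤ μᵀb, and every point feasible for the original constraints satisfies it (Glover, 1968), so optimizing over it yields a relaxation. A small sketch with illustrative data:

```python
def surrogate_constraint(A, b, mu):
    """Aggregate the system Ax <= b into the single constraint
    (mu^T A) x <= mu^T b, for multipliers mu >= 0."""
    m, n = len(A), len(A[0])
    coeffs = [sum(mu[i] * A[i][j] for i in range(m)) for j in range(n)]
    rhs = sum(mu[i] * b[i] for i in range(m))
    return coeffs, rhs

# Two knapsack-style constraints aggregated with mu = (1, 2):
A, b, mu = [[1, 2], [3, 1]], [4, 5], [1, 2]
coeffs, rhs = surrogate_constraint(A, b, mu)
assert (coeffs, rhs) == ([7, 4], 14)

# Any x feasible for Ax <= b remains feasible for the surrogate constraint:
x = [1, 1]  # satisfies 1*1 + 2*1 <= 4 and 3*1 + 1*1 <= 5
assert sum(c * xi for c, xi in zip(coeffs, x)) <= rhs  # 7 + 4 = 11 <= 14
```

The quality of the relaxation, and hence of the upper bounds, depends entirely on the choice of μ, which is why the Suboptimal algorithm (or the alternatives cited below) matters.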

To find (sub)optimal vectors of multipliers μ, other algorithms can be used (see, e.g. Sikorski, 1986). To find elements of upper shells, sophisticated combined relaxation techniques for MIP problems, e.g. Lagrangean/surrogate heuristics (see Narciso and Lorena, 1999), can also be applied. In the current work, we consider the most general formulation of the MOMIP problem, but problem-specific techniques may help to find those elements. The disadvantage of the proposed generic Chute scheme is that it does not take into account the specifics of a given instance of the MOMIP problem. However, the Chute2 modification and the Chute3 option show how this scheme can be adapted.

Within the generic framework presented, other methods of deriving upper shells in the FindUpperShell procedure can also be applied, e.g. a method shown in Miroforidis (2021).

7 Final Remarks

It has been shown how to algorithmically derive lower and upper shells for the MOMIP problem (for any k > 1) to obtain the interval representation of the Pareto optimal outcome designated by vector λ. On selected examples, it has been shown that with the help of the proposed method one can find such interval representations for randomly selected vectors λ when there is a time limit for a MIP solver on deriving a single Pareto optimal solution.

We conducted some preliminary experiments with the Chute algorithm on instances of the MOMKP with four objective functions. However, due to the mechanism adopted in the FindUpperShell algorithm for changing the probing λ vectors, the results achieved were not satisfactory.

In our future work, we want to improve the method of populating upper shells (in quest of finding elements that can provide tight upper bounds) by changing the scheme of probing the objective space, so that it determines upper shells with the desired properties for four and more objective functions. We also want to apply the presented generic approach to other instances of the MOMIP problem, especially ones connected to real-life problems. This would help verify the practicality of the proposed general method and identify those elements that could be tailored to specific instances of this problem. Possible modifications to the proposed method, indicated in Section 6, are also worth considering in further work.

Notes

1 The instances can be made available to the reader by e-mail upon request.

Acknowledgements

We thank two anonymous reviewers for their helpful comments.

References

1. Ahmadi, A., Aghaei, J., Shayanfar, H.A., Rabiee, A. (2012). Mixed integer programming of multiobjective hydro-thermal self scheduling. Applied Soft Computing, 12, 2137–2146. https://doi.org/10.1016/j.asoc.2012.03.020.

2. Delorme, X., Battaïa, O., Dolgui, A. (2014). Multi-objective approaches for design of assembly lines. In: Benyoucef, L., Hennet, J.-C., Tiwari, M. (Eds.), Applications of Multi-Criteria and Game Theory Approaches: Manufacturing and Logistics. Springer, London, pp. 31–56. https://doi.org/10.1007/978-1-4471-5295-8_2.

3. Dyer, M.E. (1980). Calculating surrogate constraints. Mathematical Programming, 19, 255–278. https://doi.org/10.1007/BF01581647.

4. Ehrgott, M. (2005). Multicriteria Optimization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-27659-9.

5. Eiselt, H.A., Marianov, V. (2014). A bi-objective model for the location of landfills for municipal solid waste. European Journal of Operational Research, 235(1), 187–194. https://doi.org/10.1016/j.ejor.2013.10.005.

6. Forget, N., Gadegaard, S.L., Klamroth, K., Nielsen, L.R., Przybylski, A. (2022). Branch-and-bound and objective branching with three or more objectives. Computers & Operations Research, 148, 106012. https://doi.org/10.1016/j.cor.2022.106012.

7. Glover, F. (1965). A multiphase-dual algorithm for the zero-one integer programming problem. Operations Research, 13(6), 879–919. https://doi.org/10.1287/opre.13.6.879.

8. Glover, F. (1968). Surrogate constraints. Operations Research, 16(4), 741–749. https://doi.org/10.1287/opre.16.4.741.

9. Gurobi (2023). GUROBI. Last accessed March 31, 2023. https://www.gurobi.com/.

10. IBM (2023). CPLEX. Last accessed March 31, 2023. https://www.ibm.com/products/ilog-cplex-optimization-studio.

11. Kaliszewski, I. (2006). Soft Computing for Complex Multiple Criteria Decision Making. Springer, New York. https://doi.org/10.1007/0-387-30177-1.

12. Kaliszewski, I., Miroforidis, J. (2014). Two-sided Pareto front approximations. Journal of Optimization Theory and Applications, 162, 845–855. https://doi.org/10.1007/s10957-013-0498-y.

13. Kaliszewski, I., Miroforidis, J. (2019). Lower and upper bounds for the general multiobjective optimization problem. AIP Conference Proceedings, 2070, 020038. https://doi.org/10.1063/1.5090005.

14. Kaliszewski, I., Miroforidis, J. (2021). Cooperative multiobjective optimization with bounds on objective functions. Journal of Global Optimization, 79, 369–385. https://doi.org/10.1007/s10898-020-00946-4.

15. Kaliszewski, I., Miroforidis, J. (2022). Probing the Pareto front of a large-scale multiobjective problem with a MIP solver. Operational Research, 22, 5617–5673. https://doi.org/10.1007/s12351-022-00708-y.

16. Kaliszewski, I., Miroforidis, J., Podkopaev, D. (2016). Multiple Criteria Decision Making by Multiobjective Optimization – A Toolbox. Springer, Cham. https://doi.org/10.1007/978-3-319-32756-3.

17. Miettinen, K.M. (1999). Nonlinear Multiobjective Optimization. Kluwer Academic Publishers. https://doi.org/10.1007/978-1-4615-5563-6.

18. Miroforidis, J. (2021). Bounds on efficient outcomes for large-scale cardinality-constrained Markowitz problems. Journal of Global Optimization, 80, 617–634. https://doi.org/10.1007/s10898-021-01022-1.

19. Narciso, M.G., Lorena, L.A.N. (1999). Lagrangean/surrogate relaxation for generalized assignment problems. European Journal of Operational Research, 114(1), 165–177. https://doi.org/10.1016/S0377-2217(98)00038-1.

20. Oke, O., Siddiqui, S. (2015). Efficient automated schematic map drawing using multiobjective mixed integer programming. Computers & Operations Research, 61, 1–17. https://doi.org/10.1016/j.cor.2015.02.010.

21. Przybylski, A., Gandibleux, X. (2017). Multi-objective branch and bound. European Journal of Operational Research, 260(3), 856–872. https://doi.org/10.1016/j.ejor.2017.01.032.

22. Samanlioglu, F. (2013). A multi-objective mathematical model for the industrial hazardous waste location-routing problem. European Journal of Operational Research, 226(2), 332–340. https://doi.org/10.1016/j.ejor.2012.11.019.

23. Sikorski, J. (1986). Quasi-subgradient algorithms for calculating surrogate constraints. In: Malanowski, K., Mizukami, K. (Eds.), Analysis and Algorithms of Optimization Problems, Lecture Notes in Control and Information Sciences, Vol. 82. Springer, Berlin, Heidelberg, pp. 202–236. https://doi.org/10.1007/BFb0007163.

24. Smith, N.A., Tromble, R.W. (2004). Sampling Uniformly from the Unit Simplex. Department of Computer Science, Center for Language and Speech Recognition, Johns Hopkins University.