Multi-objective nonlinear sum of fractional optimization problems with non-convex constraints using duality based branch and bound algorithm

The present paper investigates the solution of multiobjective nonlinear sum of fractional optimization problems. A duality based branch and bound cut method is developed to obtain efficient solutions, and the methodology is validated by proving the theoretical assertions for the solution. The present m...


Bibliographic Details
Date:2017
Main Authors: Bhati, D., Singh, P., Бхаті, Д., Сінг, П.
Format: Article
Language:English
Published: Institute of Mathematics, NAS of Ukraine 2017
Online Access:https://umj.imath.kiev.ua/index.php/umj/article/view/1795
Journal Title:Ukrains’kyi Matematychnyi Zhurnal

Institution

Ukrains’kyi Matematychnyi Zhurnal
author Bhati, D.
Singh, P.
Бхаті, Д.
Сінг, П.
baseUrl_str https://umj.imath.kiev.ua/index.php/umj/oai
collection OJS
datestamp_date 2019-12-05T09:27:02Z
description The present paper investigates the solution of multiobjective nonlinear sum of fractional optimization problems. A duality based branch and bound cut method is developed to obtain efficient solutions, and the methodology is validated by proving the theoretical assertions for the solution. The present method extends the work of P. P. Shen, Y. P. Duan and Y. G. Pei [19], which was developed for the single objective sum-of-ratios nonlinear optimization problem. The proposed method is coded in MATLAB (version 2014b). Two numerical problems are solved by the proposed method and global optimal solutions are obtained.
fulltext UDC 519.62 D. Bhati, P. Singh (Motilal Nehru Nat. Inst. Technology, Allahabad, India) MULTI-OBJECTIVE NONLINEAR SUM OF FRACTIONAL OPTIMIZATION PROBLEMS WITH NON-CONVEX CONSTRAINTS USING DUALITY BASED BRANCH AND BOUND ALGORITHM*

The present paper investigates the solution of multiobjective nonlinear sum of fractional optimization problems. A duality based branch and bound cut method is developed to obtain efficient solutions, and the methodology is validated by proving the theoretical assertions for the solution. The present method extends the work of P. P. Shen, Y. P. Duan and Y. G. Pei [19], which was developed for the single objective sum-of-ratios nonlinear optimization problem. The proposed method is coded in MATLAB (version 2014b). Two numerical problems are solved by the proposed method and global optimal solutions are obtained.

1. Introduction. In this paper, we consider a multiobjective nonlinear sum of fractional (MONSOF) optimization problem in the following form:
$$\max \left\{ \sum_{j=1}^{p} \frac{N_{1j}(x)}{D_{1j}(x)},\ \sum_{j=1}^{p} \frac{N_{2j}(x)}{D_{2j}(x)},\ \dots,\ \sum_{j=1}^{p} \frac{N_{kj}(x)}{D_{kj}(x)} \right\} \tag{P}$$
subject to $x \in S = \{ x \in \mathbb{R}^n : Ax \le b,\ Y_{\bar l}(x) \le 0,\ x \ge 0 \}$, $p, k \ge 2$.
In (P), $N_{ij}(x)$ and $D_{ij}(x)$, $i = 1, 2, \dots, k$, $j = 1, 2, \dots, p$, are real-valued nonlinear convex functions over $\mathbb{R}^n$ with $D_{ij} > 0$, and the $Y_{\bar l}(x)$, $\bar l = 1, 2, \dots, m$, are non-convex functions on $\mathbb{R}^n$; $S$ denotes the set of all feasible solutions, $A \in \mathbb{R}^{q \times n}$, $b \in \mathbb{R}^q$.

The sum of fractional optimization problem is one of the most difficult problems in the field of fractional optimization. It arises naturally in decision making when several ratios are to be optimized simultaneously and a compromise is sought that optimizes a weighted sum of these ratios. A typical application is a situation in which a compromise is needed between absolute and relative terms, such as profit and return on investment (or return/risk). Mathis and Mathis [1] identified an application of the sum of fractional optimization problem and formulated a fee optimization model; the model is used by hospital administrators in the state of Texas to decide on relative increases of charges for different medical procedures in various departments.

The understanding of the sum of fractional multiobjective problem is until now quite limited, because the special structure of the objective functions makes it a very difficult and challenging problem even within general fractional optimization. Studies of the sum of fractional optimization problem have been restricted to the single objective case, and it is not surprising that theoretical and algorithmic development for this problem is very limited too. However, some recent progress has been made on single objective sum of fractional optimization problems.

* This work is supported financially by DST-SERB, Government of India, through sanctioned order No. SB/EMEQ-049/2014.

© D. BHATI, P. SINGH, 2017

ISSN 1027-3190. Укр. мат. журн., 2017, т. 69, № 11
Benson [3 – 9] proposed various branch and bound techniques for solving different variants of sum-of-ratios optimization problems. The main feature of the developed methods is the solution of a sequence of convex programming problems that differ only in their objective function coefficients. These methods were applied to numerical problems and global optimal solutions were obtained; convergence of the proposed methods was also discussed. They can be used to find a global optimal solution of the single objective sum-of-ratios programming problem [10]. Shen and Jin [11] proposed a global optimization method for maximizing a sum of concave-convex ratios over a convex feasible region, using a branch and bound scheme to develop the algorithm. Wang and Zhang [12] presented a branch and bound algorithm for globally solving the nonlinear sum-of-ratios problem on a nonconvex feasible region; they showed that the proposed algorithm converges to the global minimum through the refinement of the solutions of a series of linear programming problems. Shen and Wang [13] proposed a branch and bound algorithm for solving the sum of linear ratios problem and likewise proved that it converges to the global optimal solution by means of the subsequent solutions of a series of linear programming problems. Jiao and Shen [14] gave a short extension of the work of Wang and Zhang to the nonlinear sum-of-ratios problem; they proposed more general results and used a different equivalent problem. Qu, Zhang and Zhao [15] proposed a new branch and bound algorithm based on rectangle partition and Lagrangian relaxation for solving the sum of quadratic ratios problem with non-convex quadratic constraints, and established the convergence of the algorithm. Shen, Chen and Yuan [16] proposed a branch reduced-bound algorithm for solving the sum of quadratic ratios with non-convex constraints; they reformulated the problem as a monotonic optimization problem and found the globally optimal solution. Shen,
Li and Bai [17] gave a method for minimizing a sum of convex-convex ratios over a convex feasible region, using a branch and bound framework to propose a new algorithm; they also established the global convergence of the method. Shen and Wang [18] developed an algorithm for sums of general fractional functions using a linearization method combined with branch and bound. Shen, Duan and Pei [19] proposed a branch and bound type algorithm to solve the sum of convex-convex ratios with non-convex constraints, combining the branch and bound method with Lagrange duality to develop the proposed scheme. Jaberipour and Khorram [20] developed a harmony search algorithm to find the solution of sum-of-ratios programming problems; this is a probability-based search algorithm motivated by musical performance. Jin and Hou [21] proposed a branch and bound algorithm to find the global optimum of the sum-of-ratios problem with ratios of absolute values of affine functions; to develop this algorithm they used a rectangular partition and worked in a space of small dimension. Gao and Jin [22] proposed a branch and bound algorithm based on the transformation of the sum-of-ratios programming problem into a bilinear programming problem; they used the linear characteristics of the convex and concave envelopes of two-variable product functions, together with a linear relaxation programming of the bilinear problem. Freund and Jarre [23] proposed an algorithm for the global minimization of a sum of quasi-convex ratios, using an interior point method to compute the global minimum. Schaible and Shi [24] reviewed the published literature on the sum-of-ratios program, giving a survey of applications, theoretical results and various algorithmic approaches for this difficult class of fractional programming.
Carlsson and Shi [25] proposed a linear relaxation technique to solve the sum of linear ratios problem; they concentrated on lower dimensional problems for a particular application and used a transformation to develop the method. Recently, Ashtiani and Ferreira [26] developed a new branch and bound algorithm for the sum-of-ratios programming problem, using a decomposition approach. The literature shows that several branch and bound techniques have been developed for solving the sum-of-ratios optimization problem, but most of the studies are restricted to single objective optimization problems only.

In contrast with single objective sum of fractional programming (SOFP), a solution to multiobjective sum of fractional programming (MOSOFP) is a concept which satisfies all the objectives simultaneously. There is no single global solution, and it is often necessary to determine a set of points that all fit a predetermined criterion of optimality. The concept of Pareto optimal solution is defined as follows.

Definition 1 (Pareto optimal solution, maximization case). A solution $x^0 \in S$ is a Pareto optimal solution of the MONSOF optimization problem if and only if there is no other solution $x \in S$ such that
$$\sum_{j=1}^{p} \frac{N_{ij}(x)}{D_{ij}(x)} \ge \sum_{j=1}^{p} \frac{N_{ij}(x^0)}{D_{ij}(x^0)} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} \frac{N_{sj}(x)}{D_{sj}(x)} > \sum_{j=1}^{p} \frac{N_{sj}(x^0)}{D_{sj}(x^0)} \quad \text{for at least one } s.$$

Definition 2 (Pareto optimal solution, minimization case). A solution $x^0 \in S$ is a Pareto optimal solution of the MONSOF optimization problem if and only if there is no other solution $x \in S$ such that
$$\sum_{j=1}^{p} \frac{N_{ij}(x)}{D_{ij}(x)} \le \sum_{j=1}^{p} \frac{N_{ij}(x^0)}{D_{ij}(x^0)} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} \frac{N_{sj}(x)}{D_{sj}(x)} < \sum_{j=1}^{p} \frac{N_{sj}(x^0)}{D_{sj}(x^0)} \quad \text{for at least one } s.$$

In this paper, we propose a method for solving multiobjective nonlinear sum of fractional (MONSOF) optimization problems. The organization of the remaining part of the paper is as follows.
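The two definitions above are ordinary Pareto dominance applied to sum-of-ratios objectives. As a quick illustration (not from the paper; the functions and sample points below are invented for the sketch), the maximization-case condition of Definition 1 can be checked mechanically over a finite sample of candidate solutions:

```python
# Illustrative sketch: Pareto-dominance test of Definition 1 (maximization)
# on a finite sample of candidates. Objective i at point x is
# f_i(x) = sum_j N_ij(x) / D_ij(x); the toy N_ij, D_ij are invented.

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (maximization):
    fa >= fb componentwise and strictly greater in at least one component."""
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

def pareto_front(points, objectives):
    """Return the points whose objective vectors are non-dominated."""
    vals = [tuple(f(x) for f in objectives) for x in points]
    return [x for x, v in zip(points, vals)
            if not any(dominates(w, v) for w in vals if w != v)]

# Toy instance with k = 2 objectives, each a sum of p = 2 ratios:
f1 = lambda x: x / (1 + x) + (2 * x) / (1 + x**2)
f2 = lambda x: (4 - x) / (1 + x) + 1 / (1 + x)
front = pareto_front([0.0, 0.5, 1.0, 2.0], [f1, f2])
# x = 2.0 is dominated by x = 1.0, so it drops out of the front
```

In the paper the comparison runs over the whole feasible set $S$, not a finite sample; the sketch only illustrates the dominance test itself.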
In Section 2, an equivalent problem is discussed. In Section 3, the proposed methodology and theoretical results are developed for the problem. The branch and bound cut method and its convergence are presented in Section 4. In Section 5, numerical experiments are given to demonstrate the proposed method. Finally, conclusions are drawn in Section 6.

2. Equivalence and transformation. In order to solve the non-convex problem, we may take $N_{ij} \le 0$ for all $x \in S$, and the problem (P) then takes the form
$$\min \left\{ \sum_{j=1}^{p} \frac{-N_{1j}(x)}{D_{1j}(x)},\ \sum_{j=1}^{p} \frac{-N_{2j}(x)}{D_{2j}(x)},\ \dots,\ \sum_{j=1}^{p} \frac{-N_{kj}(x)}{D_{kj}(x)} \right\} \tag{M}$$
subject to $x \in S = \{ x \in \mathbb{R}^n : Ax \le b,\ Y_{\bar l}(x) \le 0,\ x \ge 0 \}$, $p, k \ge 2$.

Since the $D_{ij}$ are convex functions on $\mathbb{R}^n$, they are continuous on the compact set $S \subseteq \mathbb{R}^n$ for all $i = 1, 2, \dots, k$ and $j = 1, 2, \dots, p$, which means that there exist $l_{ij}$ and $L_{ij}$ such that
$$l_{ij} = \min_{x \in S} D_{ij}(x) > 0, \qquad L_{ij} = \max_{x \in S} D_{ij}(x) > 0.$$
Thus the set
$$D = \left\{ Z_{ij} \in \mathbb{R}^{k \times p} \;\middle|\; 0 < \frac{1}{L_{ij}} \le Z_{ij} \le \frac{1}{l_{ij}},\ i = 1, 2, \dots, k,\ j = 1, 2, \dots, p \right\} \subset \mathbb{R}^{k \times p}_{+}$$
can be constructed.

Now, introducing an additional variable vector $Z = (Z_{1j}, Z_{2j}, \dots, Z_{kj}) \in \mathbb{R}^{k \times p}$ and using the transformation $Z_{ij} = 1/D_{ij}$, the problem (M) is converted into the following equivalent problem $(\widehat M)$, the equivalence being established by Theorem 1:
$$\min \left\{ \sum_{j=1}^{p} -\bigl(N_{1j}(x) Z_{1j}\bigr),\ \sum_{j=1}^{p} -\bigl(N_{2j}(x) Z_{2j}\bigr),\ \dots,\ \sum_{j=1}^{p} -\bigl(N_{kj}(x) Z_{kj}\bigr) \right\} \tag{$\widehat M$}$$
subject to
$$Y_{\bar l}(x) \le 0, \qquad -Z_{ij} D_{ij}(x) + 1 \le 0, \qquad Ax - b \le 0.$$

Theorem 1. If $x^*$ is a Pareto efficient solution of problem (P), then $(x^*, Z^*)$ is a Pareto efficient solution of problem $(\widehat M)$.
Conversely, if $(x^*, Z^*)$ is a Pareto optimal solution of problem $(\widehat M)$, then $x^*$ is a Pareto efficient solution of problem (P), where $Z^*_{ij} = 1/D_{ij}(x^*)$, $j = 1, 2, \dots, p$, $i = 1, 2, \dots, k$.

Proof. Assume that $x^*$ is a Pareto efficient solution of problem (P), i.e., of the equivalent minimization problem (M) over its feasible space $S$. Then there is no $x \in S$ such that
$$\sum_{j=1}^{p} \frac{-N_{ij}(x)}{D_{ij}(x)} \le \sum_{j=1}^{p} \frac{-N_{ij}(x^*)}{D_{ij}(x^*)} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} \frac{-N_{sj}(x)}{D_{sj}(x)} < \sum_{j=1}^{p} \frac{-N_{sj}(x^*)}{D_{sj}(x^*)} \quad \text{for at least one } s.$$
Suppose that $(x^*, Z^*)$ is not a Pareto efficient solution of problem $(\widehat M)$. Then there exists another solution $(x', Z')$ of problem $(\widehat M)$ such that
$$\sum_{j=1}^{p} -N_{ij}(x') Z'_{ij} \le \sum_{j=1}^{p} -N_{ij}(x^*) Z^*_{ij} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} -N_{sj}(x') Z'_{sj} < \sum_{j=1}^{p} -N_{sj}(x^*) Z^*_{sj} \quad \text{for at least one } s.$$
Since $Z^*_{ij} = 1/D_{ij}(x^*)$ and $Z'_{ij} = 1/D_{ij}(x')$, we get
$$\sum_{j=1}^{p} \frac{-N_{ij}(x')}{D_{ij}(x')} \le \sum_{j=1}^{p} \frac{-N_{ij}(x^*)}{D_{ij}(x^*)} \quad \forall i, \qquad \sum_{j=1}^{p} \frac{-N_{sj}(x')}{D_{sj}(x')} < \sum_{j=1}^{p} \frac{-N_{sj}(x^*)}{D_{sj}(x^*)} \ \text{for at least one } s,$$
i.e.,
$$\sum_{j=1}^{p} \frac{N_{ij}(x')}{D_{ij}(x')} \ge \sum_{j=1}^{p} \frac{N_{ij}(x^*)}{D_{ij}(x^*)} \quad \forall i, \qquad \sum_{j=1}^{p} \frac{N_{sj}(x')}{D_{sj}(x')} > \sum_{j=1}^{p} \frac{N_{sj}(x^*)}{D_{sj}(x^*)} \ \text{for at least one } s.$$
This means that $x'$ dominates $x^*$ in problem (P), contradicting the assumption that $x^*$ is a Pareto efficient solution.

Conversely, suppose $(x^*, Z^*)$ is a Pareto optimal solution of problem $(\widehat M)$. Then there is no feasible $(x, Z)$ of $(\widehat M)$ such that
$$\sum_{j=1}^{p} -N_{ij}(x) Z_{ij} \le \sum_{j=1}^{p} -N_{ij}(x^*) Z^*_{ij} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} -N_{sj}(x) Z_{sj} < \sum_{j=1}^{p} -N_{sj}(x^*) Z^*_{sj} \quad \text{for at least one } s.$$
Now assume that $x^*$ is not a Pareto efficient solution of problem (P). Then there exists another $x' \in S$ such that
$$\sum_{j=1}^{p} \frac{N_{ij}(x')}{D_{ij}(x')} \ge \sum_{j=1}^{p} \frac{N_{ij}(x^*)}{D_{ij}(x^*)} \quad \forall i = 1, 2, \dots, k$$
and
$$\sum_{j=1}^{p} \frac{N_{sj}(x')}{D_{sj}(x')} > \sum_{j=1}^{p} \frac{N_{sj}(x^*)}{D_{sj}(x^*)} \quad \text{for at least one } s.$$
Since $Z'_{ij} = 1/D_{ij}(x')$ and $Z^*_{ij} = 1/D_{ij}(x^*)$, $i = 1, 2, \dots, k$, $j = 1, 2, \dots, p$, this implies that
$$\sum_{j=1}^{p} N_{ij}(x') Z'_{ij} \ge \sum_{j=1}^{p} N_{ij}(x^*) Z^*_{ij} \quad \forall i, \qquad \sum_{j=1}^{p} N_{sj}(x') Z'_{sj} > \sum_{j=1}^{p} N_{sj}(x^*) Z^*_{sj} \ \text{for at least one } s,$$
i.e.,
$$\sum_{j=1}^{p} -N_{ij}(x') Z'_{ij} \le \sum_{j=1}^{p} -N_{ij}(x^*) Z^*_{ij} \quad \forall i, \qquad \sum_{j=1}^{p} -N_{sj}(x') Z'_{sj} < \sum_{j=1}^{p} -N_{sj}(x^*) Z^*_{sj} \ \text{for at least one } s.$$
Thus $(x^*, Z^*)$ is not a Pareto efficient solution of problem $(\widehat M)$, contradicting the assumption that $(x^*, Z^*)$ is a Pareto optimal solution of $(\widehat M)$. Theorem 1 is proved.

3. Key algorithmic processes. It follows from Theorem 1 that any technique that finds a Pareto optimal solution of $(\widehat M)$ is also applicable to the original problem (P).
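Theorem 1 rests on the substitution $Z_{ij} = 1/D_{ij}(x)$, which turns every ratio $N_{ij}/D_{ij}$ into the product $N_{ij} Z_{ij}$ without changing objective values. A minimal numeric sanity check of that identity, with toy values invented for the illustration (not taken from the paper's examples):

```python
# Hypothetical toy data: p = 2 ratios of one objective, evaluated at a fixed x.
# The substitution Z_ij = 1 / D_ij(x) turns each ratio into a product, so the
# objective value is unchanged.

def ratio_objective(N_vals, D_vals):
    """sum_j N_j / D_j -- the fractional form used in problems (P)/(M)."""
    return sum(n / d for n, d in zip(N_vals, D_vals))

def product_objective(N_vals, Z_vals):
    """sum_j N_j * Z_j -- the transformed form used in problem (M-hat)."""
    return sum(n * z for n, z in zip(N_vals, Z_vals))

N = [3.0, -1.0]           # N_ij(x) at a fixed feasible x
D = [2.0, 4.0]            # D_ij(x) > 0 at the same x
Z = [1.0 / d for d in D]  # the transformation Z_ij = 1 / D_ij(x)

assert abs(ratio_objective(N, D) - product_objective(N, Z)) < 1e-12
```

In the algorithm, of course, $Z$ is treated as a free variable boxed inside the set $D$ rather than fixed at $1/D_{ij}(x)$; the constraint $-Z_{ij} D_{ij}(x) + 1 \le 0$ ties the two problems together.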
To develop a branch and bound method for the MONSOF optimization problem (P), we first explain five key processes: (i) transform the MONSOF optimization problem (M) into a multiobjective nonlinear optimization problem; (ii) convert the multiobjective optimization problem into a single objective optimization problem; (iii) successively refine the partition of the feasible space along the longest edge of the constructed simplices; (iv) compute lower and upper bounds for the optimal value of the objective function; (v) finally, apply a deletion technique over each subspace generated by the partitions.

The partition method is a successive simplicial partition of the initial simplex $S^0$ under a partition rule, meaning that any infinite nested sequence of partition sets constructed by the developed method shrinks to a single point. The commonly used partition rule is bisection.

The process of finding an upper bound for the objective function works in two ways. First, for each simplex $S$ constructed through the branching process, the upper bounding process seeks an upper bound for the maximum of the objective function taken over $X \cap S$. Second, at any step of the method, the upper bounding process finds an upper bound for the Pareto optimal solution of problem $(\widehat M)$, hence for the MONSOF optimization problem (M).

The process of estimating a lower bound for the objective function consists in considering all feasible points found while computing upper bounds for the Pareto optimal solution of problem $(\widehat M)$, hence for the MONSOF optimization problem (P). The deletion process consists in deleting every partition subspace in which no solution exists, removing it from further processing. In the next subsections we give the details of these processes.

3.1. Primary simplex and partition. The $n$-dimensional simplex $S^0$ is constructed, following the methodology developed by Horst and Tuy [27], so that $S^0$ contains $S$ and can be subdivided into $n$-dimensional subsimplices.
This process helps to locate a Pareto optimal solution of the MONSOF problem (P). Throughout, an $n$-dimensional simplex will be called an $n$-simplex. An initial simplex $S^0$ that tightly encloses the given feasible space $S$ can be constructed as
$$S^0 = \left\{ x \in \mathbb{R}^n \;\middle|\; x_\alpha \ge \mho_\alpha,\ \alpha = 1, 2, \dots, n,\ \sum_{\alpha=1}^{n} x_\alpha \le \mho \right\},$$
where $\mho = \max\bigl\{ \sum_\alpha x_\alpha \mid x \in \Lambda \subseteq S^0 \bigr\}$ and $\mho_\alpha = \min\bigl\{ x_\alpha \mid x \in \Lambda \subseteq S^0 \bigr\}$, $\alpha = 1, 2, \dots, n$. The vertex set of $S^0$ is $\{\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n\}$, where $\widetilde V_0 = (\mho_1, \mho_2, \dots, \mho_n)$, $\widetilde V_\alpha = (\mho_1, \dots, \mho_{\alpha-1}, \gamma_\alpha, \mho_{\alpha+1}, \dots, \mho_n)$ and $\gamma_\alpha = \mho - \sum_{\bar\alpha \ne \alpha} \mho_{\bar\alpha}$.

The subdivision of simplices can be performed in the following manner. Using the branch and bound technique, a constructed $n$-simplex may be split into two parts, known as subsimplices of $S$. Suppose a simplex with vertices $\{\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n\}$ is to be split into two parts, and let $c$ be the midpoint of its longest edge $[\widetilde V_s, \widetilde V_{\bar s}]$, i.e.,
$$\bigl\| \widetilde V_s - \widetilde V_{\bar s} \bigr\| = \max_{\bar\theta < \theta} \bigl\| \widetilde V_{\bar\theta} - \widetilde V_\theta \bigr\|,$$
where $\|\cdot\|$ is any prescribed norm in $\mathbb{R}^n$ and $\bar\theta, \theta = 0, 1, 2, \dots, n$. Then $S_1$ and $S_2$ are known as the simplicial bisection of $S$.
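The longest-edge bisection just described can be sketched as follows (plain Python with a simplex stored as a list of vertex tuples; the function name and data are illustrative, not the paper's MATLAB code):

```python
import math

def bisect_longest_edge(simplex):
    """Split a simplex (list of n+1 vertices, each a tuple of floats) into two
    subsimplices by the midpoint of its longest edge (Euclidean norm)."""
    m = len(simplex)
    # find the pair of vertices (s, t) at maximum distance: the longest edge
    s, t = max(
        ((i, j) for i in range(m) for j in range(i + 1, m)),
        key=lambda ij: math.dist(simplex[ij[0]], simplex[ij[1]]),
    )
    c = tuple((a + b) / 2 for a, b in zip(simplex[s], simplex[t]))
    # S1 replaces vertex s by the midpoint c; S2 replaces vertex t by c
    S1 = [c if r == s else v for r, v in enumerate(simplex)]
    S2 = [c if r == t else v for r, v in enumerate(simplex)]
    return S1, S2

# 2-simplex (triangle) whose longest edge runs from (0,0) to (2,0):
tri = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
S1, S2 = bisect_longest_edge(tri)
# the midpoint (1.0, 0.0) appears as a vertex of both children
```

Repeating this split on the children produces exactly the nested sequences $S_{\bar\alpha+1} \subset S_{\bar\alpha}$ used below, whose diameters shrink to zero under longest-edge bisection.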
Thus the vertex set of $S_1$ is $\{\widetilde V_0, \widetilde V_1, \dots, \widetilde V_{s-1}, c, \widetilde V_{s+1}, \dots, \widetilde V_n\}$ and the vertex set of $S_2$ is $\{\widetilde V_0, \widetilde V_1, \dots, \widetilde V_{\bar s - 1}, c, \widetilde V_{\bar s + 1}, \dots, \widetilde V_n\}$. In the branch and bound process, if $\{S_{\bar\alpha}\}$ is a nested sequence of simplices, $S_{\bar\alpha+1} \subset S_{\bar\alpha}$ for all $\bar\alpha$, then $\cap_{\bar\alpha} S_{\bar\alpha} = \{x\}$ for some unique point $x \in \mathbb{R}^n$.

3.2. Lower bound. For each simplex $S_{\bar\alpha} \subseteq S^0$ produced by the branching method, the lower bounding process computes a lower bound $lb(S)$ for the optimal value $\eta(S)$ of the problem
$$\eta(S):\quad \min\ \sum_{j=1}^{p} -Z_{ij} N_{ij}(x), \quad i = 1, 2, \dots, k,$$
subject to
$$Y_{\bar l}(x) \le 0,\ \bar l = 1, \dots, m; \qquad -Z_{ij} D_{ij}(x) + 1 \le 0,\ i = 1, \dots, k,\ j = 1, \dots, p; \qquad Ax - b \le 0,$$
for all $x \in S$ and $Z \in D$. Furthermore, the weighted sum method converts this multiobjective fractional program into the following single objective nonlinear program $h$:
$$h:\quad \min\ \sum_{i=1}^{k} \sum_{j=1}^{p} -w_i Z_{ij} N_{ij}(x)$$
subject to the same constraints, where $\{-N_{ij}(x)\} \in \mathbb{R}^{k \times p}$, $\{Z_{ij}\} \in \mathbb{R}^{k \times p}$, $\{D_{ij}(x)\} \in \mathbb{R}^{k \times p}$, and the weights $w_i \in (0, 1)$ satisfy $\sum_{i=1}^{k} w_i = 1$.

Theorem 2. Suppose the feasible space $S$ of the MONSOF optimization problem is a subsimplex of $S^0$, obtained by the branch and bound technique, with vertices $\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n$. Then $lb(S) \le \eta(S)$, where $lb(S)$ is the optimal value of the linear programming problem in the variables $\beta_{ij}$, $i = 1, 2, \dots, k$, $j = 1, 2, \dots, p$, $t$, $\lambda_{\bar l}$, $\bar l = 1, 2, \dots, m$, and $u_l$, $l = 1, 2, \dots$
, $q$, given by
$$lb(S) = \max\ \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} + t$$
subject to
$$\sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(\widetilde V_r) + \sum_{l=1}^{q} u_l (A_l \widetilde V_r - b_l) - t \ge 0, \quad r = 0, 1, \dots, n,$$
$$-w_i N_{ij}(\widetilde V_r) - \beta_{ij} D_{ij}(\widetilde V_r) \ge 0, \quad i = 1, \dots, k,\ j = 1, \dots, p,$$
$$\beta \ge 0,\ \lambda \ge 0,\ u \ge 0,\ t \text{ free}, \tag{1}$$
where $A_l$ denotes the $l$th row of $A$ and $b_l$ the $l$th component of $b$, $l = 1, \dots, q$.

Proof. By the definition of $\eta(S)$ and the Lagrangian weak duality theorem of nonlinear programming, $\eta(S) \ge lb(S)$ with
$$lb(S) = \max_{\beta \ge 0,\ \lambda \ge 0,\ u \ge 0} \left[ \min_{x \in S,\ Z \in D} \sum_{i=1}^{k} \sum_{j=1}^{p} -w_i Z_{ij} N_{ij}(x) + \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} \bigl( -Z_{ij} D_{ij}(x) + 1 \bigr) + \sum_{l=1}^{q} u_l (A_l x - b_l) \right]$$
$$= \max_{\beta \ge 0,\ \lambda \ge 0,\ u \ge 0} \left[ \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} + \min_{x \in S,\ Z \in D} \left[ \sum_{i=1}^{k} \sum_{j=1}^{p} \bigl( -w_i N_{ij}(x) - \beta_{ij} D_{ij}(x) \bigr) Z_{ij} + \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l) \right] \right]$$
$$= \max_{\beta \ge 0,\ \lambda \ge 0,\ u \ge 0} \left[ \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} + \min_{x \in S,\ Z \in D} \Bigl[ \bigl\langle W^T A + \beta^T I,\ B^T \bigr\rangle + \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l) \Bigr] \right],$$
where $A(x) = \{-N_{ij}(x)\} \in \mathbb{R}^{k \times p}$, $B(x) = \{Z_{ij}\} \in \mathbb{R}^{k \times p}$, $I(x) = \{D_{ij}(x)\} \in \mathbb{R}^{k \times p}$, and $W = [w_{ij}] \in \mathbb{R}^{k \times p}$ can be taken
as
$$w_{ij} = \begin{cases} 0, & \text{if } i \ne j, \\ w_i, & \text{if } i = j. \end{cases}$$
Since
$$\min_{Z \in D} \bigl\langle W^T A + \beta^T I,\ B^T \bigr\rangle = \begin{cases} 0, & \text{if } W^T A + \beta^T I \ge 0\ \forall x \in S, \\ -\infty, & \text{otherwise}, \end{cases}$$
it follows that
$$lb(S) = \max_{\beta \ge 0,\ \lambda \ge 0,\ u \ge 0} \left[ \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} + \min_{x \in S} \left( \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l) \right) \right]$$
subject to
$$W^T A + \beta^T I \ge 0\ \forall x \in S, \qquad \beta \ge 0,\ \lambda \ge 0,\ u \ge 0. \tag{2}$$
For each $j = 1, \dots, p$ and $i = 1, \dots, k$, problem (2) yields $-w_i N_{ij}(x) - \beta_{ij} D_{ij}(x) \ge 0$ for all $x \in S$. Since $N_{ij}$ and $D_{ij}$ are convex and $\beta_{ij} \ge 0$, the function $-w_i N_{ij} - \beta_{ij} D_{ij}$ is concave for all $j$ and $i$. Since $S$ is a simplex with extreme points $\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n$, the constraint in (2) holds if and only if $-w_i N_{ij}(\widetilde V_r) - \beta_{ij} D_{ij}(\widetilde V_r) \ge 0$, $r = 0, 1, \dots, n$. This implies that the lower bound $lb(S)$ is given by
$$lb(S) = \max\ \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} + t$$
subject to
$$t \le \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l) \quad \forall x \in S,$$
$$-w_i N_{ij}(\widetilde V_r) - \beta_{ij} D_{ij}(\widetilde V_r) \ge 0, \qquad \beta \ge 0,\ \lambda \ge 0,\ u \ge 0,$$
$$j = 1, \dots, p,\ r = 0, 1, \dots, n,\ i = 1, \dots, k. \tag{3}$$
For each $\lambda \ge 0$ and $u \ge 0$, since $Y_{\bar l}(x)$ and $A_l x - b_l$ are non-convex functions for each $\bar l = 1, \dots, m$ and $l = 1, \dots, q$, the function $\sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l)$ is non-convex in $x$. Additionally, $S$ is a simplex with vertices $\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n$.
It can be seen that the constraint on $t$ holds if and only if
$$\sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(\widetilde V_r) + \sum_{l=1}^{q} u_l (A_l \widetilde V_r - b_l) - t \ge 0, \quad r = 0, 1, \dots, n.$$
Theorem 2 is proved.

Theorem 3. Let $S_1$ and $S_2$ be subsimplices of $S^0$ formed by the branching process such that $S_2 \subseteq S_1 \subseteq S^0$. Then (i) $lb(S_2) \ge lb(S_1)$, (ii) $lb(S_1) > -\infty$.

Proof. Part (i) follows directly from the definition of $lb(S)$ given in the proof of Theorem 2 for an arbitrary simplex $S$. To prove (ii), by part (i) it suffices to show that $lb(S^0) > -\infty$. From the proof of Theorem 2 we have
$$lb(S^0) = \max_{\beta \ge 0,\ \lambda \ge 0,\ u \ge 0} \left[ \min_{x \in S,\ Z \in D} \sum_{i=1}^{k} \sum_{j=1}^{p} -w_i Z_{ij} N_{ij}(x) + \sum_{i=1}^{k} \sum_{j=1}^{p} \beta_{ij} \bigl( -Z_{ij} D_{ij}(x) + 1 \bigr) + \sum_{\bar l=1}^{m} \lambda_{\bar l} Y_{\bar l}(x) + \sum_{l=1}^{q} u_l (A_l x - b_l) \right].$$
Taking $\beta = 0$, $\lambda = 0$, $u = 0$ gives
$$lb(S^0) \ge \min_{x \in S,\ Z \in D} \sum_{i=1}^{k} \sum_{j=1}^{p} -w_i Z_{ij} N_{ij}(x).$$
For $w_i > 0$, since each $N_{ij}(x)$ is convex on $\mathbb{R}^n$ it is continuous, and hence $F(x, Z) = \sum_{i=1}^{k} \sum_{j=1}^{p} -w_i Z_{ij} N_{ij}(x)$ is a continuous function of $(x, Z)$ on $\Psi = S^0 \times D$. By the compactness of $\Psi$, the minimum $\min_{x \in S,\ Z \in D} F(x, Z)$ is finite, and therefore $lb(S^0) > -\infty$. Theorem 3 is proved.

3.3. Upper bound. This subsection describes the process of determining an upper bound for the Pareto optimal solution of problem (M) for each simplex $S$ generated by the process for which $lb(S)$ is finite. To find the upper bound, each iteration of the algorithmic process checks all the feasible solutions contained in $S$; as more feasible solutions are found, the value of the upper bound may be improved iteratively.

Theorem 4.
Let $S$ be a subsimplex of $S^0$ with vertices $\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n$ and suppose that $lb(S) \ne +\infty$. Let $(Z^0, Z^1, \dots, Z^n)$ be the optimal dual variables corresponding to the first $n + 1$ constraints of the linear program (1), and let $\omega = \sum_{r=0}^{n} Z^r \widetilde V_r$. If $Y_{\bar l}(\omega) \le 0$ for each $\bar l = 1, 2, \dots, m$, then $\omega$ is a feasible solution of problem (M).

Proof. The dual linear program of problem (1) is
$$DLP(S):\quad lb(S) = \min\ \sum_{j=1}^{p} \sum_{i=1}^{k} \sum_{r=0}^{n} -Z^r_{ij} N_{ij}(\widetilde V_r)$$
subject to
$$\sum_{r=0}^{n} Z^r = 1, \tag{4}$$
$$\sum_{r=0}^{n} -Z^r Y_{\bar l}(\widetilde V_r) \ge 0, \qquad \sum_{r=0}^{n} Z^r_{ij} D_{ij}(\widetilde V_r) \ge 1,\ j = 1, \dots, p,$$
$$\sum_{r=0}^{n} Z^r (b_l - A_l \widetilde V_r) \ge 0,\ l = 1, 2, \dots, q, \tag{5}$$
$$Z^r_{ij} \ge 0,\ i = 1, \dots, k,\ j = 1, \dots, p,\ r = 0, 1, \dots, n.$$
From constraint (5) of $DLP(S)$ we have $\sum_{r=0}^{n} Z^r (b_l - A_l \widetilde V_r) \ge 0$, $l = 1, \dots, q$. Using constraint (4), we get
$$A_l \sum_{r=0}^{n} Z^r \widetilde V_r \le \sum_{r=0}^{n} Z^r b_l = b_l, \quad l = 1, \dots, q,$$
which implies $A_l \omega \le b_l$, $l = 1, \dots, q$. If, in addition, $Y_{\bar l}(\omega) \le 0$, $\bar l = 1, \dots, m$, then $\omega$ is a feasible solution of problem (M).

3.4. Process of deletion. As the branch and bound search proceeds, finitely many simplices are generated by the developed algorithm, and certain simplices among them are eliminated from further consideration. At the beginning of each step, a simplex $S_{\bar\alpha}$ is created by simplicial bisection in the branching process and is subjected to the deletion-by-infeasibility test. Let $\widetilde V_0, \widetilde V_1, \dots, \widetilde V_n$ denote the vertices of such a simplex $S$. If for some $l \in \{1, 2, \dots, q\}$
$$\min \bigl\{ A_l \widetilde V_0 - b_l,\ A_l \widetilde V_1 - b_l,\ \dots,\ A_l \widetilde V_n - b_l \bigr\} > 0$$
or, for some $\bar l \in \{1, 2, \dots, m\}$,
$$\min \bigl\{ Y_{\bar l}(\widetilde V_0),\ Y_{\bar l}(\widetilde V_1),\ \dots,\ Y_{\bar l}(\widetilde V_n) \bigr\} > 0,$$
then the simplex $S$ satisfies the deletion-by-infeasibility test and is removed from further consideration.

4. Proposed method and its convergence. In this section, based on the results and the branch and bound processes discussed in Section 3, the basic outline of the developed branch and bound method for solving the MONSOF optimization problem (P) is given. The method is coded in MATLAB (version 2014b) and run on a 64-bit machine with 4 GB RAM. The whole process is summarized as follows.

Step 1 (Initial setting). (i) Choose a suitable weight vector $w = (w_1, w_2, \dots, w_k)$, with each $w_i \in (0, 1)$ and $\sum_{i=1}^{k} w_i = 1$, so that the MONSOF optimization problem $\eta(S)$ converts into the single objective problem $h$. (ii) Solve the linear programming problem (1) for the initial simplex $S^0$ to find the optimal value $lb(S^0)$ and determine the feasible set $F(S^0)$. (iii) Set $\mu_0 = lb(S^0)$. If $F(S^0) \ne \varnothing$, compute $\nu_0 = \min\{ h(x) : x \in F(S^0) \}$ and choose $x^0 \in F(S^0)$ such that $h(x^0) = \nu_0$; otherwise set $\nu_0 = +\infty$. Set $G_0 = \{S^0\}$ and $\sigma = 1$ (iteration counter).

Step 2 (Stopping criterion). If $\nu_\sigma = \mu_\sigma$, then STOP: $x^\sigma$ is a Pareto efficient solution of problem (P).

Step 3 (Iterations). For iteration counter $\sigma \ge 1$, execute (a) to (h).

(a) Set $x^\sigma = x^{\sigma-1}$, $\mu_\sigma = \mu_{\sigma-1}$, $\nu_\sigma = \nu_{\sigma-1}$; if $S^{\sigma-1}$ exists, set $S^\sigma = S^{\sigma-1}$. If $\nu_\sigma = \mu_\sigma$, then STOP and $x^\sigma$ is a Pareto efficient solution of problem (P); otherwise continue as follows.

(b) Divide $S^\sigma$ into two parts $S^\sigma_1$ and $S^\sigma_2$ by bisection along the longest simplicial edge.
Set $T' = \{S^\sigma_1, S^\sigma_2\}$.
(c) Apply the deletion-by-infeasibility test to each simplex of $T'$ and delete every simplex that fails it; denote by $T$ the resulting subset of $T'$.
(d) For each $S^\sigma_{\bar\alpha} \in T$, execute (d1) to (d3):
(d1) find the optimal value $lb(S)$ of the linear programming problem (1);
(d2) determine the set $F(S)$ of feasible solutions contained in $S$; if $F(S) \neq \emptyset$, compute $h(\widehat x) = \min\{h(x) : x \in F(S)\}$ and choose $\widehat x \in F(S)$; otherwise set $h(\widehat x) = +\infty$;
(d3) if $h(\widehat x) < h(x^\sigma)$, set $x^\sigma = \widehat x$ and $\nu_\sigma = h(\widehat x)$.
(e) Set $G_\sigma = \{G_{\sigma-1} \setminus S^\sigma\} \cup T$.
(f) Delete from $G_\sigma$ every $S \in G_\sigma$ such that $lb(S) > \nu_\sigma$.
(g) If $G_\sigma \neq \emptyset$, set $\mu_\sigma = \min\{lb(S) : S \in G_\sigma\}$ and choose $S^\sigma \in G_\sigma$ such that $lb(S^\sigma) = \mu_\sigma$; otherwise set $\mu_\sigma = \nu_\sigma$.
(h) Set $\sigma = \sigma + 1$ and go to iteration $\sigma$.

4.1. Convergence. The convergence of the proposed method depends on the quality of $\nu_\sigma$; the method may therefore benefit from a fast local search. The method is either finite or infinite. If it is finite, it terminates at some iteration $\sigma$, and the feasible point $x^\sigma$ is a Pareto optimal solution of the original problem $(P)$. If the method does not terminate at any iteration $\sigma$, it is easy to show that it generates at least one infinite nested sequence $\{S_{\bar\alpha}\}$ of simplices with $S_{\bar\alpha+1} \subseteq S_{\bar\alpha}$ for all $\bar\alpha$. In this situation, the following result is the key to the convergence of the algorithm.
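Before turning to that result, the main loop of Steps 1 to 3 can be illustrated in code. The sketch below is illustrative only, not the authors' Matlab implementation: the helper names (`bisect_longest_edge`, `lower_bound`, `feasible_point`) and the toy convex instance are hypothetical, and the duality-based bound $lb(S)$ from linear program (1) is replaced by a simple linearization bound for demonstration.

```python
import itertools
import numpy as np

def bisect_longest_edge(V):
    """Step (b): split a simplex (list of vertex arrays) along its longest edge."""
    pairs = list(itertools.combinations(range(len(V)), 2))
    i, j = max(pairs, key=lambda p: np.linalg.norm(V[p[0]] - V[p[1]]))
    mid = 0.5 * (V[i] + V[j])
    S1 = [mid if r == i else v for r, v in enumerate(V)]
    S2 = [mid if r == j else v for r, v in enumerate(V)]
    return S1, S2

def branch_and_bound(V0, lower_bound, feasible_point, h, max_iter=200, tol=1e-6):
    """Schematic weighted-sum branch and bound following Steps 1-3."""
    # Step 1: bound the initial simplex and set the incumbent.
    G = [(lower_bound(V0), V0)]
    x_best = feasible_point(V0)
    nu = h(x_best) if x_best is not None else np.inf
    for _ in range(max_iter):
        # Step (g): select the simplex with the smallest lower bound.
        G.sort(key=lambda t: t[0])
        mu, V = G.pop(0)
        # Step 2: stop when the bound gap closes (nu == mu up to tol).
        if nu - mu <= tol:
            break
        # Steps (b)-(d): bisect, re-bound, and update the incumbent.
        for S in bisect_longest_edge(V):
            lb = lower_bound(S)
            x = feasible_point(S)
            if x is not None and h(x) < nu:
                x_best, nu = x, h(x)
            if lb <= nu:  # step (f): prune simplices with lb(S) > nu
                G.append((lb, S))
        if not G:
            break
    return x_best, nu

# Toy instance (hypothetical, for illustration only): minimize the convex
# function h(x) = ||x - c||^2 over the standard simplex in R^2.
c = np.array([0.3, 0.2])

def h(x):
    return float(np.sum((x - c) ** 2))

def lower_bound(V):
    # Valid lower bound for convex h: linearize at the centroid; a linear
    # function attains its minimum over a simplex at one of the vertices.
    xc = sum(V) / len(V)
    g = 2.0 * (xc - c)
    return min(h(xc) + float(g @ (v - xc)) for v in V)

def feasible_point(V):
    return sum(V) / len(V)  # the centroid is always feasible here

V0 = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_best, nu = branch_and_bound(V0, lower_bound, feasible_point, h)
```

The best-first selection (step (g)), longest-edge bisection (step (b)), incumbent update (step (d3)), and pruning (step (f)) mirror the structure of the proposed method, while the bounding subproblem is deliberately simplified.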
Suppose $F(x, Z)\colon S^0 \times D \to \mathbf R$ is the objective function of problem $(\widehat M)$, with $F(x, Z) = \sum_{i=1}^{k} \sum_{j=1}^{p} -w_{ij} Z_{ij} N_{ij}(x)$, and let $H(x, Z)\colon S^0 \times D \to \mathbf R^{k \times p} \times \mathbf R^{q} \times \mathbf R^{m}$ be the vector function formed by the constraints of $(\widehat M)$, with $H(x, Z) = (H_1(x, Z), \ldots, H_{k \times p + q + m}(x, Z)) \in \mathbf R^{k \times p} \times \mathbf R^{q} \times \mathbf R^{m}$, where the components of $H(x, Z)$ are given by
$$H_{\bar k}(x, Z) = \begin{cases} Y_{\bar k}(x), & \bar k = 1, 2, \ldots, m,\\ -Z_{\bar k} D_{\bar k}(x) + 1, & \bar k = m + (i, j),\ i = 1, 2, \ldots, k,\ j = 1, 2, \ldots, p,\\ A_{\bar k - (i, j)} x - b_{\bar k - (i, j)}, & \bar k = (i, j) + 1, \ldots, m + (i, j) + q. \end{cases}$$

Theorem 5. Suppose that the proposed method is infinite, let $\{S_{\bar\alpha}\}$ be an infinite nested sequence of simplices generated by the method, and let $\cap_{\bar\alpha} S_{\bar\alpha} = \{x^*\}$. Then $x^*$ is an optimal solution of problem $(M)$, and hence $x^*$ is a Pareto efficient solution of problem $(P)$.

Proof. Assume that the method is infinite and that the sequence $\{S_{\bar\alpha}\}$ is chosen as in the theorem. By R. Horst and H. Tuy [27], $\cap_{\bar\alpha} S_{\bar\alpha} = \{x^*\}$ for some point $x^* \in \mathbf R^n$, and $x^* \in S^0$ since $S_{\bar\alpha} \subseteq S^0$ for all $\bar\alpha$. By the definitions of $F(x, Z)$ and $H(x, Z)$, problem $(\widehat M)$ can be rewritten as
$$\min \{F(x, Z) : H(x, Z) \leq 0,\ x \in S^0,\ Z \in D\}.$$
By Theorem 3, the sequence $lb(S_{\bar\alpha})$ is nondecreasing; therefore the limit $u^* = \lim_{\bar\alpha \to \infty} lb(S_{\bar\alpha})$ exists. Let
$$\eta(S^0) = \min \{F(x, Z) : H(x, Z) \leq 0,\ x \in S^0,\ Z \in D\};$$
then obviously
$$u^* \leq \eta(S^0). \qquad (6)$$
Next we show that
$$u^* \geq \eta(S^0). \qquad (7)$$
Note that
$$\min \{F(x^*, Z) : H(x^*, Z) \leq 0,\ Z \in D\} \geq \eta(S^0). \qquad (8)$$
Therefore we only need to prove that
$$u^* \geq \min \{F(x^*, Z) : H(x^*, Z) \leq 0,\ Z \in D\}. \qquad (9)$$
On the contrary, suppose that (9) does not hold, that is,
$$\min \{F(x^*, Z) : H(x^*, Z) \leq 0,\ Z \in D\} > u^*. \qquad (10)$$
From the statement of the problem we know that the functions $N_{ij}(x)$ and $D_{ij}(x)$ are convex on $\mathbf R^n$ for each $i = 1, 2, \ldots, k$, $j = 1, 2, \ldots, p$, and the nonlinear constraint functions $Y_{\bar l}(x)$ are concave on $\mathbf R^n$ for all $\bar l = 1, 2, \ldots, m$. Hence $N_{ij}(x)$, $D_{ij}(x)$ and $Y_{\bar l}(x)$ are continuous, and therefore the functions $F(x, Z)$ and $H_{\bar k}(x, Z)$, $\bar k = 1, 2, \ldots, k \times p + q + m$, are continuous on $\Psi = S^0 \times D$ and linear in $Z$ for every fixed value of $x$. Consequently, for a fixed $x \in S^0$, the function $(Z, \bar\beta) \mapsto F(x, Z) + \langle \bar\beta, H(x, Z)\rangle$ is linear in $Z$ and in $\bar\beta$ separately, where $\bar\beta = (\beta_{1j}, \beta_{2j}, \ldots, \beta_{ij}, \lambda_1, \lambda_2, \ldots, \lambda_m, u_1, u_2, \ldots, u_q)$, and the minimax equality
$$\min_{Z \in D}\ \max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}} \{F(x, Z) + \langle \bar\beta, H(x, Z)\rangle\} = \max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}}\ \min_{Z \in D} \{F(x, Z) + \langle \bar\beta, H(x, Z)\rangle\} \qquad (11)$$
holds. From the above we can clearly write
$$\max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}} \{F(x, Z) + \langle \bar\beta, H(x, Z)\rangle\} = \begin{cases} F(x, Z), & \text{if } H(x, Z) \leq 0,\\ +\infty, & \text{otherwise.} \end{cases}$$
Combining the last identity with (11), for every $x \in S^0$ we get
$$\min \{F(x, Z) : H(x, Z) \leq 0,\ Z \in D\} = \max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}}\ \min_{Z \in D} \{F(x, Z) + \langle \bar\beta, H(x, Z)\rangle\}. \qquad (12)$$
Using (10) and (12) with $x = x^*$, we can write
$$\max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}}\ \min_{Z \in D} \{F(x^*, Z) + \langle \bar\beta, H(x^*, Z)\rangle\} > u^*,$$
so there exists $\bar\beta^*$ satisfying
$$\min_{Z \in D} \{F(x^*, Z) + \langle \bar\beta^*, H(x^*, Z)\rangle\} > u^*.$$
Using the continuity of the function $(\widehat x, \widehat Z) \mapsto F(\widehat x, \widehat Z) + \langle \bar\beta^*, H(\widehat x, \widehat Z)\rangle$, for every fixed $Z \in D$ we can find an open ball $U_Z$ in $\mathbf R^n$ around $x^*$ and an open ball $V_Z$ in $\mathbf R^p$ around $Z$ such that
$$F(\widehat x, \widehat Z) + \langle \bar\beta^*, H(\widehat x, \widehat Z)\rangle > u^* \quad \forall\, \widehat x \in U_Z,\ \widehat Z \in V_Z.$$
Since the balls $V_Z$, $Z \in D$, form a covering of the compact set $D$, there is a finite set $E \subseteq D$ such that the balls $V_Z$, $Z \in E$, still form a covering of $D$. Let $U = \cap_{Z \in E} U_Z$; then for every $Z \in D$ we have $Z \in V_{\widehat Z}$ for some $\widehat Z \in E$, and therefore
$$F(\widehat x, Z) + \langle \bar\beta^*, H(\widehat x, Z)\rangle > u^* \quad \forall\, \widehat x \in U,\ Z \in D.$$
But $S_{\bar\alpha} \subseteq U$ for all sufficiently large $\bar\alpha$, because $\cap_{\bar\alpha} S_{\bar\alpha} = \{x^*\}$.
Then
$$\max_{\bar\beta \in \mathbf R^{k \times p}_{+} \times \mathbf R^{q}_{+} \times \mathbf R^{m}_{+}}\ \min \{F(x, Z) + \langle \bar\beta, H(x, Z)\rangle : x \in S_{\bar\alpha},\ Z \in D\} > u^*.$$
Hence $lb(S_{\bar\alpha}) > u^*$, which contradicts the fact that $u^*$ is the limit of the nondecreasing sequence $lb(S_{\bar\alpha})$. Therefore (9), and with it (7), holds; combining (6) and (7), we obtain
$$u^* = \eta(S^0) = \min \{F(x^*, Z) : H(x^*, Z) \leq 0,\ Z \in D\}.$$
This implies that
$$\min \{F(x^*, Z) : H(x^*, Z) \leq 0,\ Z \in D\} = \min \{F(x, Z) : H(x, Z) \leq 0,\ x \in S^0,\ Z \in D\},$$
so an optimal solution $Z^*$ of the problem on the left-hand side gives an optimal solution $(x^*, Z^*)$ of problem $(\widehat M)$. Therefore, by Theorem 1, $x^*$ is a Pareto optimal solution of problem $(P)$.

5. Computational experiments. In this section two numerical problems are given to demonstrate the proposed method. These problems were constructed by combining the objective functions of the numerical problems of Shen, Duan and Pei [19] into multiobjective optimization problems.

5.1. Numerical problem 1. Consider the multiobjective nonlinear sum-of-fractional optimization problem
$$\max \left( F_1 = \frac{-2x_1 - x_2}{x_1 + 10} + \frac{-2}{x_2 + 10},\quad F_2 = \frac{x_1^2 - 3x_1 + x_2^2 - 3x_2 - 3.5}{x_1 + 1} + \frac{-x_2}{x_1^2 - 2x_1 + x_2^2 - 8x_2 + 20} \right)$$
subject to
$$-x_1^2 - x_2^2 + 3 \leq 0,\quad -x_1^2 - x_2^2 + 8x_2 - 14 \leq 0,\quad 2x_1 + x_2 \leq 6,\quad 3x_1 + x_2 \leq 8,\quad x_1 - x_2 \leq 1,\quad x_1, x_2 \geq 1.$$
Using the method proposed in Section 4, we first find the simplex $S^0$, whose vertices are $(1.0000, 1.0000)$, $(4.0000, 1.0000)$, $(1.0000, 4.0000)$. Choosing suitable weights for the objective functions converts the MONSOF optimization problem into a single-objective optimization problem. Applying the proposed method, we obtain the efficient solutions given in Table 1.
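As a quick check on the data reported for this problem, the objectives and constraints can be evaluated directly at a candidate point. A minimal Python sketch follows; the helper names `objectives` and `is_feasible` are ours, not the paper's.

```python
# Illustrative check of numerical problem 1: evaluate F1, F2 and the
# constraints at a candidate point (helper names are hypothetical).
def objectives(x1, x2):
    F1 = (-2 * x1 - x2) / (x1 + 10) + -2 / (x2 + 10)
    F2 = ((x1 ** 2 - 3 * x1 + x2 ** 2 - 3 * x2 - 3.5) / (x1 + 1)
          + -x2 / (x1 ** 2 - 2 * x1 + x2 ** 2 - 8 * x2 + 20))
    return F1, F2

def is_feasible(x1, x2, tol=1e-9):
    constraints = [
        -x1 ** 2 - x2 ** 2 + 3,            # -x1^2 - x2^2 + 3 <= 0
        -x1 ** 2 - x2 ** 2 + 8 * x2 - 14,  # -x1^2 - x2^2 + 8x2 - 14 <= 0
        2 * x1 + x2 - 6,                   # 2x1 + x2 <= 6
        3 * x1 + x2 - 8,                   # 3x1 + x2 <= 8
        x1 - x2 - 1,                       # x1 - x2 <= 1
        1 - x1,                            # x1 >= 1
        1 - x2,                            # x2 >= 1
    ]
    return all(c <= tol for c in constraints)

F1, F2 = objectives(1.8295, 1.8295)  # the efficient point reported in Table 1
```

Evaluating at $(1.8295, 1.8295)$ reproduces the reported values $F_1 \approx -0.6329$ and $F_2 \approx -2.9683$ to within rounding of the solution point.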
By examining Table 1, the efficient solution of the problem is $x_1 = 1.8295$, $x_2 = 1.8295$, and the values of the objective functions at the obtained point are $F_1 = -0.6329$, $F_2 = -2.9683$.

Table 1. Solution of MONSOF optimization problem 1 using the proposed method

S. No.   w1     w2     x1       x2       F1        F2        Iterations   CPU time, s
1        0.10   0.90   1.5147   1.5147   -0.5682   -3.3414   31           1.632
2        0.20   0.80   1.5330   1.5330   -0.5721   -3.3210   32           1.664
3        0.30   0.70   1.5574   1.5574   -0.5772   -3.2933   30           1.668
4        0.40   0.60   1.5899   1.5899   -0.5830   -3.2562   30           1.593
5        0.50   0.50   1.6363   1.6363   -0.5936   -3.2019   30           1.640
6        0.60   0.40   1.6221   1.6221   -0.5907   -3.2164   30           1.647
7        0.70   0.30   1.8295   1.8295   -0.6329   -2.9683   28           1.603
8        0.80   0.20   1.8925   1.8295   -0.6329   -2.9683   28           1.601
9        0.90   0.10   2.1685   1.4945   -0.6531   -2.9741   39           2.011

5.2. Numerical problem 2. Consider the multiobjective nonlinear sum-of-ratios programming problem
$$\max \left( \frac{f_{11}}{g_{11}} + \frac{f_{12}}{g_{12}},\quad \frac{f_{21}}{g_{21}} + \frac{f_{22}}{g_{22}} + \frac{f_{23}}{g_{23}} \right)$$
subject to
$$x_1 + x_2 + x_3 + x_4 \leq 24,\quad 6 \leq x_1 \leq 10,\quad 4 \leq x_2 \leq 6,\quad 8 \leq x_3 \leq 12,\quad 6 \leq x_4 \leq 8,$$
where
$$f_{11} = x_1^2 - 4x_1 + 2x_2^2 - 8x_2 + 3x_3^2 - 12x_3 + 4x_4^2 - 16x_4 - 65,$$
$$f_{12} = 2x_1^2 - 16x_1 + x_2^2 - 8x_2 + 0.5x_3^2 - 4x_3 + 0.25x_4^2 - 2x_4 - 15,$$
$$g_{11} = x_1^2 - 2x_1 + x_2^2 - 2x_2 + x_3^2 - 3x_3 + x_4 + 28,$$
$$g_{12} = 2x_1 + 4x_2 + 6x_3 + 8x_4,$$
$$f_{21} = \sum_{j=1}^{4} \bigl(x_j^2 - 16x_j\bigr) + 214,$$
$$f_{22} = x_1^2 - 16x_1 + 2x_2^2 - 20x_2 + 3x_3^2 - 60x_3 + 4x_4^2 - 56x_4 + 586,$$
$$f_{23} = \sum_{j=1}^{4} \bigl(x_j^2 - 20x_j\bigr) + 324,$$
$$g_{21} = 2x_1 - x_2 - x_3 + x_4 + 2,$$
$$g_{22} = -x_1 + x_2 + x_3 - x_4 + 10,$$
$$g_{23} = x_1^2 - 4x_4.$$
Using the method proposed in Section 4, we first find the simplex $S^0$, whose vertices are $(6.0000, 4.0000, 8.0000, 6.0000)$, $(8.0000, 4.0000, 8.0000, 6.0000)$, $(6.0000, 6.0000, 8.0000, 6.0000)$, $(6.0000, 4.0000, 10.0000, 6.0000)$, $(6.0000, 4.0000, 8.0000, 8.0000)$. After choosing suitable weights for the objective functions and applying the method, we obtain the results given in Table 2.

Table 2. Solution of MONSOF optimization problem 2 using the proposed method

S. No.   w1     w2     x1       x2       x3       x4       F1       F2        Iterations   CPU time, s
1        0.10   0.90   6.0624   4.0001   8.0000   7.9375   1.1610   -5.7985   60           3.193
2        0.20   0.80   6.2499   4.0000   8.0001   7.7500   1.0770   -4.9110   61           3.196
3        0.30   0.70   6.2500   4.0000   8.0000   7.7500   1.0770   -4.9106   62           3.273
4        0.40   0.60   6.0000   4.0000   8.0000   6.0000   0.3907   -2.7833   61           3.236
5        0.50   0.50   6.0000   4.0000   8.0000   6.0000   0.3907   -2.7833   64           3.319
6        0.60   0.40   6.0001   4.0000   8.0000   6.0000   0.3907   -2.7834   61           3.178
7        0.70   0.30   6.0000   4.0000   8.0000   6.0000   0.3907   -2.7833   63           3.272
8        0.80   0.20   6.0000   4.0000   8.0000   6.0000   0.3907   -2.7833   61           3.251
9        0.90   0.10   6.0000   4.0000   8.0000   6.0000   0.3907   -2.7833   63           3.250

By examining Table 2, the efficient solution of the problem is $x_1 = 6.0000$, $x_2 = 4.0000$, $x_3 = 8.0000$, $x_4 = 6.0000$, and the values of the objective functions at the achieved point are $F_1 = 0.3907$, $F_2 = -2.7833$.

6. Conclusions. Sum-of-fractional optimization problems are considered very difficult even in the single-objective case. In this paper we proposed a weighted-sum based branch and bound method for solving multiobjective nonlinear sum-of-fractional optimization problems. The method is an extension of the method presented by Shen, Duan and Pei [19].
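As a sanity check on the computational results of Section 5, the objectives of numerical problem 2 can be re-evaluated at the reported efficient point. The sketch below uses our own helper names, and several coefficients were reconstructed from the garbled extraction (e.g. reading $g_{12} = 2x_1 + 4x_2 + 6x_3 + 8x_4$); the readings were chosen so that the values at $(6, 4, 8, 6)$ reproduce the reported $F_1$ and $F_2$.

```python
# Sanity check of numerical problem 2 (helper names are ours; coefficient
# readings reconstructed from the garbled extraction, chosen to match the
# reported objective values at the efficient point).
def F1(x1, x2, x3, x4):
    f11 = x1**2 - 4*x1 + 2*x2**2 - 8*x2 + 3*x3**2 - 12*x3 + 4*x4**2 - 16*x4 - 65
    g11 = x1**2 - 2*x1 + x2**2 - 2*x2 + x3**2 - 3*x3 + x4 + 28
    f12 = 2*x1**2 - 16*x1 + x2**2 - 8*x2 + 0.5*x3**2 - 4*x3 + 0.25*x4**2 - 2*x4 - 15
    g12 = 2*x1 + 4*x2 + 6*x3 + 8*x4
    return f11 / g11 + f12 / g12

def F2(x1, x2, x3, x4):
    xs = (x1, x2, x3, x4)
    f21 = sum(x**2 - 16*x for x in xs) + 214
    g21 = 2*x1 - x2 - x3 + x4 + 2
    f22 = x1**2 - 16*x1 + 2*x2**2 - 20*x2 + 3*x3**2 - 60*x3 + 4*x4**2 - 56*x4 + 586
    g22 = -x1 + x2 + x3 - x4 + 10
    f23 = sum(x**2 - 20*x for x in xs) + 324
    g23 = x1**2 - 4*x4
    return f21 / g21 + f22 / g22 + f23 / g23
```

At $(6, 4, 8, 6)$ these evaluate to $F_1 \approx 0.3907$ and $F_2 \approx -2.7833$, matching the reported efficient values.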
We also designed two numerical problems by combining numerical problems available in the literature, and solved them by the proposed method, coded in Matlab (ver. 2014b). From the computational results we obtain Pareto optimal solutions of the MONSOF optimization problem $(P)$; the computational results are summarized in Tables 1 and 2.

References

1. Mathis F. H., Mathis L. J. A nonlinear programming algorithm for hospital management // SIAM Rev. – 1995. – 37. – P. 230 – 234.
2. Schaible S. A note on the sum of a linear and linear fractional functions // Naval Res. Log. Quarterly. – 1977. – 24. – P. 61 – 963.
3. Benson H. P. Global optimization algorithm for the nonlinear sum of ratios problem // J. Math. Anal. and Appl. – 2001. – 263. – P. 301 – 315.
4. Benson H. P. Global optimization algorithm for the nonlinear sum of ratios problem // J. Optim. Theory and Appl. – 2002. – 112(1). – P. 1 – 129.
5. Benson H. P. Generating sum of ratios test problems in global optimization // J. Optim. Theory and Appl. – 2003. – 119(3). – P. 615 – 621.
6. Benson H. P. On the global optimization of sums of linear fractional functions over a convex set // J. Optim. Theory and Appl. – 2004. – 121(1). – P. 19 – 39.
7. Benson H. P. A simplicial branch and bound duality-bounds algorithm for the linear sum-of-ratios problem // Eur. J. Oper. Res. – 2007. – 182. – P. 597 – 611.
8. Benson H. P. Solving sum of ratios fractional programs via concave minimization // J. Optim. Theory and Appl. – 2007. – 135. – P. 1 – 17.
9. Benson H. P. Branch-and-bound outer approximation algorithms for sum-of-ratios fractional programs // J. Optim. Theory and Appl. – 2010. – 146. – P. 1 – 18.
10. Scott C. H., Jefferson T. R. Duality of nonconvex sum of ratios // J. Optim. Theory and Appl. – 1998. – 98(1). – P. 151 – 159.
11. Shen P., Jin L. Using canonical partition to globally maximizing the nonlinear sum of ratios // Appl. Math. Model. – 2010. – 34. – P. 2396 – 2413.
12. Wang Y. J., Zhang K. C. Global optimization of nonlinear sum of ratios problem // Appl. Math. and Comput. – 2004. – 158. – P. 319 – 330.
13. Shen P. P., Wang C. F. Global optimization for sum of ratios problem with coefficients // Appl. Math. and Comput. – 2006. – 176. – P. 219 – 229.
14. Jiao H., Shen P. A note on the paper global optimization of nonlinear sum of ratios // Appl. Math. and Comput. – 2007. – 188. – P. 1812 – 1815.
15. Qu S. J., Zhang K. C., Zhao J. K. An efficient algorithm for globally minimizing sum of quadratic ratios problem with nonconvex quadratic constraints // Appl. Math. and Comput. – 2007. – 189. – P. 1624 – 1636.
16. Shen P., Chen Y., Yuan M. Solving sum of quadratic ratios fractional programs via monotonic function // Appl. Math. and Comput. – 2009. – 212. – P. 234 – 244.
17. Shen P., Li W., Bai X. Maximizing for the sum of ratios of two convex functions over a convex set // Comput. and Oper. Res. – 2013. – 40. – P. 2301 – 2307.
18. Shen P., Wang C. F. Global optimization for sum of generalized fractional functions // J. Comput. and Appl. Math. – 2008. – 214. – P. 1 – 12.
19. Shen P. P., Duan Y. P., Pei Y. G. A simplicial branch and duality bound algorithm for the sum of convex-convex ratios problem // J. Comput. and Appl. Math. – 2009. – 223. – P. 145 – 158.
20. Jaberipour M., Khorram E. Solving the sum-of-ratios problem by a harmony search algorithm // J. Comput. and Appl. Math. – 2010. – 234. – P. 733 – 742.
21. Jin L., Hou X. P. Global optimization for a class of nonlinear sum of ratios problems // Math. Probl. Eng. – 2014. – Article ID 103569.
22. Gao Y., Jin S. A global optimization algorithm for sum of linear ratios problem // J. Appl. Math. – 2013. – Article ID 276245.
23. Freund R. W., Jarre F. Solving the sum-of-ratios problem by an interior-point method // J. Global Optim. – 2001. – 19. – P. 83 – 102.
24. Schaible S., Shi J. Fractional programming: the sum-of-ratios case // Optim. Methods and Software. – 2003. – 18(2). – P. 219 – 229.
25. Carlsson J. G., Shi J. A linear relaxation algorithm for solving the sum of linear ratios problem with lower dimension // Oper. Res. Lett. – 2013. – 41. – P. 381 – 389.
26. Ashtiani A. M., Ferreira A. V. A branch-and-cut algorithm for a class of sum-of-ratios problems // Appl. Math. and Comput. – 2015. – 268. – P. 596 – 608.
27. Horst R., Tuy H. Global optimization: deterministic approaches. – Berlin: Springer, 1996.

Received 15.02.16
id umjimathkievua-article-1795
institution Ukrains’kyi Matematychnyi Zhurnal
keywords_txt_mv keywords
language English
last_indexed 2026-03-24T02:12:48Z
publishDate 2017
publisher Institute of Mathematics, NAS of Ukraine
record_format ojs
resource_txt_mv umjimathkievua/75/8b583bd5bdcebec098f9a330f0a71675.pdf
spelling umjimathkievua-article-17952019-12-05T09:27:02Z Multi-objective nonlinear sum of fractional optimization problems with non-convex constraints using duality based branch and bound algorithm Мультi-об’єктна нелiнiйна сума дробових оптимiзацiйних проблем з неопуклими обмеженнями з використанням дуального алгоритму гiлок та границь Bhati, D. Singh, P. Бхаті, Д. Сінг, П. The present paper investigates the solution of multiobjective nonlinear sum of fractional optimization problems. A duality based branch and bound cut method is developed to obtain efficient solutions, and the methodology is validated by proving theoretical assertions about the solution. The present method is an extension of the work of P. P. Shen, Y. P. Duan and Y. G. Pei [19], which was developed for the single-objective sum-of-ratios nonlinear optimization problem. The proposed method is coded in Matlab (version 2014b). Two numerical problems are solved using the proposed method, and global optimal solutions are obtained. В роботi вивчається розв’язок мультi-об’єктної нелiнiйної суми дробових оптимiзацiйних проблем. Для ефективного розв’язування таких проблем в роботi розроблено дуальний алгоритм гiлок та границь. Запропонована методологiя обгрунтована доведенням необхiдних теоретичних тверджень. Метод, що застосовано, є узагальненням роботи П. П. Шена, I. П. Дуана та I. Г. Пея (повне посилання) для однооб’єктної нелiнiйної оптимiзацiйної задачi про суми вiдношень. Цей метод реалiзовано в MatLab (версiя 2014b). Двi числовi проблеми розглянуто i розв’язано за допомогою цього методу та отримано їх глобальнi оптимальнi розв’язки. Institute of Mathematics, NAS of Ukraine 2017-11-25 Article Article application/pdf https://umj.imath.kiev.ua/index.php/umj/article/view/1795 Ukrains’kyi Matematychnyi Zhurnal; Vol. 69 No.
11 (2017); 1455-1471 Український математичний журнал; Том 69 № 11 (2017); 1455-1471 1027-3190 en https://umj.imath.kiev.ua/index.php/umj/article/view/1795/777 Copyright (c) 2017 Bhati D.; Singh P.
title Multi-objective nonlinear sum of fractional optimization problems with non-convex constraints using duality based branch and bound algorithm
title_alt Мультi-об’єктна нелiнiйна сума дробових оптимiзацiйних проблем з неопуклими обмеженнями з використанням дуального алгоритму гiлок та границь
url https://umj.imath.kiev.ua/index.php/umj/article/view/1795