CEDRIC - OC RSS feed
http://cedric.cnam.fr/rss/OC.xml
frThu, 10 Jan 2019 22:35:07 +0100https://cedric.cnam.fr/index.php/publis/article/view?id=4407
https://cedric.cnam.fr/index.php/publis/article/view?id=4407
An FPT Algorithm for Planar Multicuts with Sources and Sinks on the Outer FaceGiven a list of k source–sink pairs in an edge-weighted graph G, the minimum multicut problem consists in selecting a set of edges of minimum total weight in G, such that removing these edges leaves no path from each source to its corresponding sink. To the best of our knowledge, no non-trivial FPT result for special cases of this problem, which is APX-hard in general graphs for any fixed k≥3, is known with respect to k only. When the graph G is planar, this problem is known to be polynomial-time solvable if k=O(1), but cannot be FPT with respect to k under the Exponential Time Hypothesis. In this paper, we show that, if G is planar and in addition all sources and sinks lie on the outer face, then this problem does admit an FPT algorithm when parameterized by k (although it remains APX-hard when k is part of the input, even in stars). To do this, we provide a new characterization of optimal solutions in this case, and then use it to design a "divide-and-conquer" approach: namely, some edges that are part of any such solution actually define an optimal solution for a polynomial-time solvable multiterminal variant of the problem on some of the sources and sinks (which can be identified thanks to a reduced enumeration phase). Removing these edges from the graph cuts it into several smaller instances, which can then be solved recursively.Thu, 10 Jan 2019 22:35:07 +0100OCPaperhttps://cedric.cnam.fr/index.php/labo/membre/zales
https://cedric.cnam.fr/index.php/labo/membre/zales
Zacharie AlesThu, 15 Nov 2018 10:20:57 +0100OCJobhttps://cedric.cnam.fr/index.php/publis/article/view?id=4392
https://cedric.cnam.fr/index.php/publis/article/view?id=4392
Phylogenetic conservation prioritization with uncertaintyWe consider a set of species S and are interested in assessing the subsets of S from a phylogenetic diversity viewpoint. Several measures can be used for this assessment. Here we have retained phylogenetic diversity (PD) in the sense of Faith, a measure widely used to reflect the evolutionary history accumulated by a group of species. The PD of a group of species X included in S is easy to calculate when the phylogenetic tree associated with S is perfectly known, but this is rarely the case. We are interested here in cases where uncertainty regarding the branch lengths and the topology of the tree is reflected in the fact that several phylogenetic trees are considered plausible for the set S. We propose several measures of phylogenetic diversity that take account of the uncertainty arising from this situation. A natural problem in the field of biological conservation is to select, from a group of threatened species, the best subset of species to protect. Here, the best subset is the one that optimizes the proposed measures. We show how to solve these optimal selection problems by integer linear programming. The approach is illustrated by several examples.Wed, 14 Nov 2018 13:22:33 +0100OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4377
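Faith's PD, the measure retained above, can be made concrete with a small sketch: the PD of a subset X of species is the total branch length of the smallest subtree of the phylogeny connecting X (here, to the root). The tree encoding and species names below are illustrative, not taken from the paper.

```python
def faith_pd(parent, length, subset):
    """parent: child -> parent node (root maps to None),
    length: node -> length of the branch above it,
    subset: iterable of leaf nodes (species) to assess."""
    kept = set()
    for leaf in subset:
        node = leaf
        while node is not None and node not in kept:
            kept.add(node)          # walk up until we hit an already-kept node
            node = parent[node]
    # sum branch lengths of kept nodes (the root has no branch above it)
    return sum(length[n] for n in kept if parent[n] is not None)

# Toy phylogeny: root r with internal node u; leaves a, b under u, leaf c under r.
parent = {"r": None, "u": "r", "a": "u", "b": "u", "c": "r"}
length = {"r": 0.0, "u": 1.0, "a": 0.5, "b": 0.5, "c": 2.0}
print(faith_pd(parent, length, {"a", "b"}))  # 1.0 + 0.5 + 0.5 = 2.0
print(faith_pd(parent, length, {"a", "c"}))  # 1.0 + 0.5 + 2.0 = 3.5
```

The optimization problems in the paper then select the subset maximizing such a measure under a budget, which is where the integer linear programming formulations come in.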
https://cedric.cnam.fr/index.php/publis/article/view?id=4377
Non-convex Quadratic Integer Programming: a piecewise linearizationWe address in this talk Non-convex Quadratic Integer Programming (NCQIP). More precisely, we consider a problem in which the objective function is non-convex quadratic, with pure general integer variables and linear constraints. The method proposed here generalizes a previous work addressing convex QIP. We propose a general method to solve such problems that first transforms the problem into a mixed separable one, still non-convex. The quadratic part of the objective function becomes a weighted sum of squared variables, with no more products of two variables. This first transformation is done by diagonalizing the Hessian matrix of the initial objective function and requires new real variables and a linear number of added constraints. Then we propose to use a parametric piecewise linearization of the equivalent problem. This linearization allows us to find the optimum of the initial problem as the number of line segments asymptotically grows. Experiments are presented, in both convex and non-convex contexts, and extensions to quadratic constraints are discussed. Wed, 24 Oct 2018 18:23:37 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4372
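The core building block after diagonalization is the piecewise linearization of a single squared variable. A minimal sketch (not the paper's exact parametric scheme): over [lo, hi] with breakpoints y_0 < ... < y_k, t = y^2 is approximated by the chord between consecutive breakpoints, and the error vanishes as the number of segments grows.

```python
def breakpoints(lo, hi, k):
    """k equal-width segments over [lo, hi]: k+1 breakpoints."""
    step = (hi - lo) / k
    return [lo + i * step for i in range(k + 1)]

def pw_square(y, pts):
    """Piecewise-linear approximation of y**2 using the given breakpoints."""
    for a, b in zip(pts, pts[1:]):
        if a <= y <= b:
            # chord of y^2 through (a, a^2) and (b, b^2): slope a + b
            return a * a + (a + b) * (y - a)
    raise ValueError("y outside the linearized range")

pts = breakpoints(-3, 3, 6)                      # unit-width segments
print(pw_square(1.5, pts))                       # chord value on [1, 2]: 2.5
print(pw_square(1.5, breakpoints(-3, 3, 600)))   # close to the true value 2.25
```

In a MILP this chord model would be enforced with SOS2 or binary segment-selection variables; the sketch only shows the approximation itself.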
https://cedric.cnam.fr/index.php/publis/article/view?id=4372
Trains do not vanish: the ROADEF/EURO challenge 2014The ROADEF/EURO challenge is a contest jointly organized by the French Operational Research and Decision Aid society (ROADEF) and the European Operational Research society (EURO). The contest has been held on a regular basis since 1999 and always concerns an applied optimization problem proposed by an industrial partner. The 2014 edition of the ROADEF/EURO challenge was led by the Innovation & Research department of SNCF, a global leader in passenger and freight transport services, and infrastructure manager of the French railway network. The objective of the challenge was to find the best way to store and move trains on large railway sites, between their arrivals and departures. Since trains never vanish and traffic continues to increase, some stations have faced real congestion issues in recent years. Train management in large railway sites is of high interest for SNCF, which is why it was submitted to the operations research community as the industrial problem for the 2014 edition of the ROADEF/EURO challenge. This paper introduces the special section of the Annals of Operations Research volume devoted to the ROADEF/EURO challenge 2014, as well as the methods of the finalist teams and their results. Mon, 01 Oct 2018 09:59:43 +0200OCPaperhttp://cedric.cnam.fr/index.php/publis/article/view?id=4360
http://cedric.cnam.fr/index.php/publis/article/view?id=4360
Estimating Daily Evaporation from Poorly-Monitored Lakes Using Limited Meteorological Data
Open water evaporation is influenced by several meteorological parameters such as irradiance, soil temperature, relative humidity, atmospheric pressure and wind speed. However, dealing with that matter in cases of measurement scarcity is a challenging task. To overcome this problem, the authors sought a lower-dimensional method to estimate lake evaporation. This technique takes into account only three weather variables: temperature, relative humidity and dew point. The approach is summarized as follows: 1) using the Levenberg-Marquardt algorithm, a nonlinear regression model based on the Magnus formula is trained and tested to estimate the dew point; 2) a simplified Penman formula provides an estimate of the lake evaporation rate. To test the approach's effectiveness, the suggested method was applied to Qaraoun Lake, Lebanon. Upon testing, the regression model exhibited high accuracy with a goodness-of-fit value equal to 0.99. Afterward, the evaporation rates were estimated using the Penman formula. Unfortunately, evaporation measurements are not available on site to carry out the testing procedures. Instead, outcomes were compared with the monthly evaporation average retrieved from the region nearest to the lake. Estimated rates were reasonably good, with a correlation coefficient equal to 0.8. Overall, the achieved results were reliable enough to carry out a further assessment of the economic impact of evaporation losses from the Qaraoun reservoir on hydropower generation.Tue, 11 Sep 2018 08:01:14 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4356
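The Magnus(-Tetens) dew-point approximation underlying step 1 is standard; a minimal sketch with one common parameterization (a = 17.27, b = 237.7 °C) follows. The paper fits a nonlinear regression around this formula, which is not reproduced here.

```python
import math

A, B = 17.27, 237.7  # common Magnus constants (dimensionless, degrees Celsius)

def dew_point(temp_c, rh_percent):
    """Dew point (degrees C) from air temperature (degrees C) and relative humidity (%)."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

print(round(dew_point(25.0, 100.0), 1))  # saturated air: dew point equals 25.0
print(round(dew_point(25.0, 60.0), 1))   # drier air: dew point drops to about 16.7
```

Note that at 100% relative humidity the formula collapses to the air temperature itself, a useful sanity check on any fitted variant.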
https://cedric.cnam.fr/index.php/publis/article/view?id=4356
Optimal multi-crop planning implemented under deficit irrigationA Multi-Crop Planning (MCP) optimization model for cropping pattern and water allocation is introduced as a nonlinear programming problem. Its solution promotes an efficient use of water, with the flexibility to keep the chosen crops at either full or deficit irrigation throughout different stages, so that the net financial return is maximized within certain production bounds and resource constraints. The solution approach is as follows: first, preliminary mathematical tools are presented, involving existence results, benchmark linear models and a relaxation formulation; second, two meta-heuristic algorithms, Simulated Annealing (SA) and Particle Swarm Optimization (PSO), are implemented as numerical techniques for solving the MCP problem. The particularity of our approach consists in using the solution of the linear problem as an initial guess for SA, while for PSO the particle swarm is initialized in the neighborhood of that solution.Mon, 10 Sep 2018 19:19:02 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4355
https://cedric.cnam.fr/index.php/publis/article/view?id=4355
Short-Term Hydro Generation Scheduling of Cascade Plants Operating on Litani River ProjectShort-term optimal scheduling of cascade hydropower stations is a typical nonlinear programming problem that minimizes the deviation between the produced power and the grid requirement while satisfying hydraulic and electrical constraints. In this work, the main objective is to find the optimal hourly water discharge rate of each hydro station in a multi-reservoir system so as to minimize the power deficit. The demanded load is then distributed among the working units of three cascade hydropower stations constructed on the Litani River, Lebanon: Markaba, Awali and Charles Helou. To achieve our goal, methods combining data mining and mathematical programming serve as the base for a Decision Support Tool (DST). Based on the DST features, a distributed control structure is implemented using a multi-software framework: MATLAB is used for mathematical optimization, while LabVIEW is employed to develop a Human-Machine Interface (HMI) for sequential control.Mon, 10 Sep 2018 18:26:50 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4354
https://cedric.cnam.fr/index.php/publis/article/view?id=4354
Daily River Flow Prediction Based on Two-Phase Constructive Fuzzy Systems Modeling: A Case of Hydrological-Meteorological Measurements AsymmetryAccurate daily river flow forecasts are essential in many water resources applications such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). One of the potential solutions to the measurement asymmetry issue is data re-sampling: considering either only the hydrological data or only the balanced part of the hydro-meteorological data set during the forecasting process. Its main disadvantage, however, is that potentially relevant information in the left-out data is lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model implemented over the non-re-sampled data. The introduced modeling approach is designed to exploit the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of river flow daily measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5% of the actual river flow variation, respectively. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.Mon, 10 Sep 2018 14:39:52 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4351
https://cedric.cnam.fr/index.php/publis/article/view?id=4351
Scheduling under energy constraints with storage and piecewise-linear costsWed, 05 Sep 2018 23:13:03 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4350
https://cedric.cnam.fr/index.php/publis/article/view?id=4350
Lot-sizing for remanufacturing under uncertainty: a stochastic multi-stage mixed-integer programming approachWed, 05 Sep 2018 23:09:47 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4349
https://cedric.cnam.fr/index.php/publis/article/view?id=4349
Lot-sizing for remanufacturing under uncertainty: a stochastic multi-stage mixed-integer programming approach Wed, 05 Sep 2018 23:04:42 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4348
https://cedric.cnam.fr/index.php/publis/article/view?id=4348
MIP Formulations for Just-in-Time Scheduling with Common Due-DateWed, 05 Sep 2018 23:01:44 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4347
https://cedric.cnam.fr/index.php/publis/article/view?id=4347
Scheduling energy-consuming jobs on parallel machines with piecewise-linear costs and storage resources: A lot-sizing and scheduling perspectiveWed, 05 Sep 2018 22:58:43 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4346
https://cedric.cnam.fr/index.php/publis/article/view?id=4346
Decomposition method in a scheduling problem with energy storage and costsWe consider a scheduling and energy source assignment problem abstracted from several applications including data centers, smart buildings, hybrid vehicles and manufacturing. A set of pre-emptive jobs has to be scheduled on a set of machines. The energy consumed by a machine has a fixed part (when switched on/off) and a variable part depending on the tasks in process. Each task requires a known energy amount when processed on a given machine. A given schedule therefore induces a time-dependent total energy demand. Two energy sources are available to supply this demand. One is a reversible source, able to produce and retrieve energy within a limited capacity. The other is a non-reversible source of infinite capacity, only able to produce energy, but its usage comes with a cost expressed as a time-dependent piecewise-linear function of the energy supplied. The objective is to minimize the total energy cost. We present original lot-sizing and scheduling decomposition approaches, an analysis of the sub-problems, and computational results.Wed, 05 Sep 2018 22:54:09 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4345
https://cedric.cnam.fr/index.php/publis/article/view?id=4345
Valid inequalities for solving a stochastic lot-sizing problem with returnsWe seek to plan production activities on a remanufacturing system over a multi-period horizon. The system comprises three processes: disassembly, refurbishing and reassembly. Uncertainties are assumed on the quality and quantity of returned products, customer demand and production costs. This leads to a multi-echelon stochastic lot-sizing problem with product returns and lost sales, minimizing the total expected production costs. We propose a multi-stage stochastic integer programming approach relying on a scenario tree to represent the uncertain information structure, resulting in the formulation of a large-size MILP. New valid tree inequalities are obtained by mixing previously known path inequalities. They are used in a branch-and-cut framework to solve the problem. Computational results illustrate the effectiveness of the proposed method: the number of instances solved to optimality is increased by a factor of 1.8 compared to the use of the commercial solver CPLEX.Wed, 05 Sep 2018 22:47:28 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4344
https://cedric.cnam.fr/index.php/publis/article/view?id=4344
Extreme points for scheduling around a common due dateWe study a single-machine just-in-time scheduling problem with a polyhedral approach. The aim is to minimize the weighted sum of earliness and tardiness penalties around a common due date. An instance is considered unrestrictive if all the tasks can be scheduled before the due date. In this case, some dominance properties allow us to efficiently solve some particular instances. In the general case, some of these dominance properties are no longer valid. For the unrestrictive case, we provide both compact and non-compact formulations. The latter is extended to the general case. For the non-compact formulations, a vector satisfying all the constraints can correspond to an infeasible schedule. However, we ensure that the extreme points of the associated polyhedra correspond to feasible schedules satisfying the dominance properties. We prove that the separation problem of these two formulations reduces to a min-cut problem, leading to a polynomial-time algorithm. Experimental results illustrate the effectiveness of these formulations.Wed, 05 Sep 2018 22:43:49 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4343
https://cedric.cnam.fr/index.php/publis/article/view?id=4343
The single-item green lot-sizing problem with fixed carbon emissionsThis presentation is based on our paper in EJOR (Absi et al., 2016), which considers a single-item lot-sizing problem with a periodic carbon emission constraint. In each period, the carbon emission constraint defines an upper limit on the average emission per product. Different supply modes are available, each one characterized by its own cost and carbon emission parameters. The problem consists in selecting the modes used in each period such that no carbon emission constraint is violated, and the cost of satisfying all the demands on a given time horizon is minimized. This problem, introduced in Absi et al. (2013), has been shown to be polynomially solvable when only unit carbon emissions are considered. In this work, we extend the analysis to the realistic case of a fixed carbon emission associated with each mode, in addition to its unit carbon emission. We show that the resulting problem is NP-hard. Several dominance properties are presented, and two dynamic programming algorithms are proposed. We also establish that the problem can be solved in polynomial time for a fixed number of modes when carbon emission parameters are stationary. The presentation will end with a discussion of the extension of the problem to the multi-item case.Wed, 05 Sep 2018 22:37:53 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4342
https://cedric.cnam.fr/index.php/publis/article/view?id=4342
Stochastic lot-sizing for remanufacturing planning with lost sales and returnsWe propose a branch-and-cut framework to solve the resulting large-size mixed-integer linear program. The algorithm relies on a new set of valid inequalities obtained by mixing previously known path inequalities. The number of these valid inequalities grows exponentially with the number of nodes in the scenario tree. We provide an efficient cutting-plane generation strategy to identify the useful subset of this class. Our computational experiments show that the proposed method is capable of significantly decreasing the computation time needed to obtain guaranteed optimal solutions.Wed, 05 Sep 2018 16:32:07 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4341
https://cedric.cnam.fr/index.php/publis/article/view?id=4341
The Unit-capacity Constrained Permutation ProblemThe Unit-capacity Constrained Permutation Problem (UCPP) is to find a sequence of moves for pieces over a set of locations. From a given location, a piece can be moved towards a location subject to a unit-capacity constraint, i.e. the latter location must be free of its original piece. Each piece has a specific type, and at the end every location must contain a piece of a required type. A piece must be handled using a specific tool, incurring a setup cost whenever a tool changeover is required. The aim of the UCPP is to find a sequence of moves with minimum total setup cost. This problem arises in the Nuclear power plant Fuel Renewal Problem (NFRP), where locations correspond to fuel assemblies and pieces to fuel assembly inserts. We first show that the UCPP is NP-hard. We exhibit some symmetry and dominance properties and propose a dynamic programming algorithm. Using this algorithm, we prove that the UCPP is polynomial when two tools and two types are considered. Experimental results showing the efficiency of the algorithm on some instances coming from the NFRP are presented.Wed, 05 Sep 2018 14:03:50 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4340
https://cedric.cnam.fr/index.php/publis/article/view?id=4340
A Family of Scheduling Algorithms for Hybrid Parallel PlatformsMore and more parallel computing platforms are built upon hybrid architectures combining multi-core processors (CPUs) and hardware accelerators like General Purpose Graphics Processing Units (GPGPUs). We present in this paper a new method for efficiently scheduling parallel applications with m CPUs and k GPGPUs, where each task of the application can be processed either on a usual core (CPU) or on a GPGPU. We consider the problem of scheduling n independent tasks with the objective of minimizing the time for completing the whole application (makespan). This problem is NP-hard; thus, we present two families of approximation algorithms that achieve approximation ratios of (2q+1)/(2q)+ε or 2(q+1)/(2q+1)+ε for any integer q≥1 when only one GPGPU is considered, and (2q+1)/(2q)+1/(2qk)+ε or 2(q+1)/(2q+1)+1/((2q+1)k)+ε for k≥2 GPGPUs, where ε is an arbitrarily small value corresponding to the target accuracy of a binary search. The proposed method is based on a dual approximation scheme that uses a dynamic programming algorithm. The associated computational costs are, for the first (resp. second) family, in O(n²k^(q+1)m^q) (resp. O(n²k^(q+2)m^(q+1))) per step of the dual approximation. The greater the value of the parameter q, the better the approximation, but the more expensive the computational cost. Finally, we propose a relaxed version of the algorithm which achieves a running time in O(n log n) with a constant approximation bound of 2. This last result is compared to the state-of-the-art algorithm HEFT. The proposed solving method is the first general-purpose algorithm for scheduling on hybrid machines with a theoretical performance guarantee that can be used for practical purposes.Wed, 05 Sep 2018 13:43:58 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4339
https://cedric.cnam.fr/index.php/publis/article/view?id=4339
Two-level lot-sizing with inventory boundsWe study a two-level uncapacitated lot-sizing problem with inventory bounds that occurs in a supply chain composed of a supplier and a retailer. The first level, with the demands, is the retailer level, and the second one is the supplier level. The aim is to minimize the cost of the supply chain so as to satisfy the demands when the quantity of items that can be held in inventory at each period is limited. The inventory bounds can be imposed at the retailer level, at the supplier level or at both levels. We propose a polynomial dynamic programming algorithm to solve this problem when the inventory bounds are set at the retailer level. When the inventory bounds are set at the supplier level, we show that the problem is NP-hard. We give a pseudo-polynomial algorithm which solves this problem when there are inventory bounds on both levels. In the case where demand lot-splitting is not allowed, i.e. each demand has to be satisfied by a single order, we prove that the uncapacitated lot-sizing problem with inventory bounds is strongly NP-hard. This implies that the two-level lot-sizing problems with inventory bounds are also strongly NP-hard when demand lot-splitting is considered.Wed, 05 Sep 2018 11:55:03 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4338
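Dynamic programs for such problems build on the classic single-level uncapacitated lot-sizing recursion (Wagner-Whitin). A minimal sketch of that building block, with illustrative cost data (this is not the paper's two-level recursion):

```python
def wagner_whitin(demand, setup, unit_hold):
    """Min cost to satisfy demand[0..T-1]: an order placed in period i
    serves periods i..j-1, paying setup[i] plus holding for carried units."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[j] = min cost for periods < j
    for j in range(1, T + 1):
        for i in range(j):              # i = period of the last order
            hold = sum((t - i) * unit_hold * demand[t] for t in range(i, j))
            best[j] = min(best[j], best[i] + setup[i] + hold)
    return best[T]

# High holding cost relative to setups: ordering every period is optimal here.
print(wagner_whitin([60, 100, 140, 200], setup=[100] * 4, unit_hold=1.0))  # 400.0
```

Inventory bounds restrict which (i, j) order intervals are feasible, which is what makes some of the variants above NP-hard.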
https://cedric.cnam.fr/index.php/publis/article/view?id=4338
Single-Machine Common Due Date Total Earliness/Tardiness Scheduling with Machine Unavailability Research on non-regular performance measures is at best scarce in the deterministic machine scheduling literature with machine unavailability constraints. Moreover, almost all existing works in this area assume either that processing on jobs interrupted by an interval of machine unavailability may be resumed without any additional setup/processing, or that all prior processing is lost. In this work, we intend to partially fill these gaps by studying the problem of scheduling a single machine so as to minimize the total deviation of the job completion times from an unrestrictive common due date when one or several fixed intervals of unavailability are present in the planning horizon. We also put a serious effort into investigating models with semi-resumable jobs, so that processing on a job interrupted by an interval of machine unavailability may later be resumed at the expense of some extra processing time. The conventional assumptions regarding resumability are also taken into account. Several interesting cases are identified and explored, depending on the resumability scheme and the location of the interval of machine unavailability with respect to the common due date. The focus of the analysis is on structural properties and on drawing the boundary between polynomially solvable and NP-complete cases. Pseudo-polynomial dynamic programming algorithms are devised for the variants that are NP-complete in the ordinary sense. Tue, 04 Sep 2018 18:36:51 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4327
https://cedric.cnam.fr/index.php/publis/article/view?id=4327
Critical Vertices and Edges in H-free GraphsA vertex or edge in a graph is critical if its deletion reduces the chromatic number of the graph by one. We consider the problems of deciding whether a graph has a critical vertex or edge, respectively. We give a complexity dichotomy for both problems restricted to H-free graphs, that is, graphs with no induced subgraph isomorphic to H. Moreover, we show that an edge is critical if and only if its contraction reduces the chromatic number by one. Hence, we also obtain a complexity dichotomy for the problem of deciding if a graph has an edge whose contraction reduces the chromatic number by one.Fri, 24 Aug 2018 11:31:54 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4315
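The critical-vertex notion above can be illustrated by brute force: compute the chromatic number exactly (backtracking, so only feasible for tiny graphs) and test each vertex deletion. This is only an illustration of the definition, not the dichotomy machinery of the paper.

```python
def colorable(adj, nodes, k):
    """Backtracking test: can the graph be properly k-colored?"""
    color = {}
    def place(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for c in range(k):
            if all(color.get(u) != c for u in adj[v]):
                color[v] = c
                if place(i + 1):
                    return True
                del color[v]
        return False
    return place(0)

def chromatic_number(adj):
    nodes = list(adj)
    k = 1
    while not colorable(adj, nodes, k):
        k += 1
    return k

def critical_vertices(adj):
    """Vertices whose deletion lowers the chromatic number by one."""
    base = chromatic_number(adj)
    crit = []
    for v in adj:
        rest = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
        if chromatic_number(rest) == base - 1:
            crit.append(v)
    return crit

# Odd cycle C5 plus a pendant vertex 5: every cycle vertex is critical
# (deleting it leaves a bipartite graph), while the pendant is not.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
adj[5] = {0}
adj[0] = adj[0] | {5}
print(chromatic_number(adj))   # 3
print(critical_vertices(adj))  # [0, 1, 2, 3, 4]
```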
https://cedric.cnam.fr/index.php/publis/article/view?id=4315
Estimating daily evaporation from poorly-monitored lakes using limited meteorological data: A case study within Qaraoun dam - LebanonOpen water evaporation is influenced by several meteorological parameters such as irradiance, soil temperature, relative humidity, atmospheric pressure and wind speed. However, dealing with that matter in cases of measurement scarcity is a challenging task. To overcome this problem, the authors sought a lower-dimensional method to estimate lake evaporation. This technique takes into account only three weather variables: temperature, relative humidity and dew point. The approach is summarized as follows: 1) using the Levenberg-Marquardt algorithm, a nonlinear regression model based on the Magnus formula is trained and tested to estimate the dew point; 2) a simplified Penman formula provides an estimate of the lake evaporation rate. To test the approach's effectiveness, the suggested method was applied to Qaraoun Lake, Lebanon. Upon testing, the regression model exhibited high accuracy with a goodness-of-fit value equal to 0.99. Afterwards, the evaporation rates were estimated using the Penman formula. Unfortunately, evaporation measurements are not available on site to carry out the testing procedures. Instead, outcomes were investigated and compared with the monthly evaporation average retrieved from the region nearest to the lake. Estimated rates were reasonably good, with a correlation coefficient equal to 0.89 and a mean absolute percentage error around 9.8%. At the final stage of this study, a sensitivity analysis is performed to quantify the impact of temperature and relative humidity changes on evaporation. Overall, the achieved results were reliable enough to carry out a further assessment of the economic impact of evaporation losses from the Qaraoun reservoir on hydropower generation and on the irrigation sector.Thu, 26 Jul 2018 20:13:39 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4309
https://cedric.cnam.fr/index.php/publis/article/view?id=4309
Cutting Planes by Projecting Interior Points onto Polytope FacetsHopefully something new and a bit better for cutting planes.Wed, 18 Jul 2018 22:08:34 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4299
https://cedric.cnam.fr/index.php/publis/article/view?id=4299
Valid inequalities for QCQPsWe consider the solution of mixed-integer quadratically constrained programs (QCQP). In our algorithm, we introduce valid inequalities that we separate during a convexification process. This convex formulation is then used within a spatial B&B. As expected, the valid inequalities improve the gap at the root node of the spatial B&B and, by construction, they favorably impact the behavior of the whole algorithm.Thu, 12 Jul 2018 11:00:10 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4298
https://cedric.cnam.fr/index.php/publis/article/view?id=4298
Unconstrained 0-1 polynomial optimization through convex quadratic reformulationThis paper addresses the resolution of unconstrained binary polynomial programs (P). We propose a new three-phase algorithm to solve (P). The first phase consists in reformulating (P) into a quadratic program (QP) using standard linearization inequalities. In the second phase, we reformulate (P) into a convex quadratic program (QPC). This convexification is computed thanks to a semidefinite relaxation. We compute the optimal value of the continuous relaxation of (QPC) using the binary identity. Moreover, in order to start the third phase (the Branch and Bound phase) with a tight bound, we use new valid equalities depending on the chosen quadratization. These equalities significantly improve the quality of the bound, as shown by testing our method on several benchmark instances and comparing it to other polynomial solvers.Tue, 10 Jul 2018 13:49:35 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4295
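The convexification idea for binary quadratics can be sketched in a few lines: since x_i^2 = x_i on {0,1}, x'Qx equals x'(Q + mu*I)x - mu*sum(x) for any mu, and taking mu >= -lambda_min(Q) makes the quadratic convex. The paper obtains sharper perturbations from a semidefinite relaxation; here mu is simply read off the smallest eigenvalue of a tiny 2x2 example.

```python
def quad(Q, x):
    """Evaluate x'Qx for a symmetric matrix Q given as nested lists."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

Q = [[0.0, 2.0], [2.0, 0.0]]      # eigenvalues +2 and -2: non-convex
mu = 2.0                          # -lambda_min(Q), computed by hand here
Qc = [[Q[i][j] + (mu if i == j else 0.0) for j in range(2)] for i in range(2)]

# The perturbed convex objective agrees with the original on every binary point.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert quad(Q, x) == quad(Qc, x) - mu * sum(x)
print("convexified objective agrees on all binary points")
```

The continuous relaxation of the convexified form then yields the bound that the branch-and-bound phase starts from.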
https://cedric.cnam.fr/index.php/publis/article/view?id=4295
Contraction and Deletion Blockers for Perfect Graphs and H-free GraphsWe study the following problem: for given integers d, k and a graph G, can we reduce some fixed graph parameter π of G by at least d via at most k graph operations from some fixed set S? As parameters we take the chromatic number, clique number and independence number, and as operations we choose edge contraction and vertex deletion.
We determine the complexity of this problem for a number of subclasses of perfect graphs.Mon, 18 Jun 2018 12:00:42 +0200OCPaperhttps://cedric.cnam.fr/index.php/publis/article/view?id=4293
https://cedric.cnam.fr/index.php/publis/article/view?id=4293
Families of facets for the 0-1 quadratic knapsack polytopeFri, 15 Jun 2018 17:50:16 +0200OCPaper