Introduction

Hello! You have reached the AbSolute book. AbSolute is a constraint solver written in OCaml for research purposes, but we hope that it can be used outside of research as well! The main originality of this solver is that it is based on abstract interpretation, a field studying the static analysis of programs. We and others have discovered similarities between constraint programming and abstract interpretation at the theoretical level, and this solver is a practical experiment built on these findings. In this book, you will learn how to install AbSolute and how to solve your own constraint problems. We also include a guide to kobe, a benchmarking tool with support for AbSolute and other solvers. For adventurous users, we provide a guide to contributing to AbSolute, as well as some notes on the research relevant to this project.

This project is available on GitHub.

Highlight of AbSolute features

An abstract domain encapsulates a solver for a specific constraint language. We summarize the supported abstract domains in the table below (c1, c2 are constraints, X, Y variables, and a, b constants). Each abstract domain is parametrized by a bound type Z, F or Q, depending on whether it contains variables over integers, floating-point numbers or rationals.

Abstract domain | Constraint language | Propagation | Description
Box(V) | Interval constraints (X <= a, X >= b) | None | Simple Cartesian product of the variable domains. Box is parametric in a variable domain V (interval, open/closed interval, or BDD).
SAT | Propositional formulas ((X \/ Y) /\ (true \/ Z)) | Conflict-driven clause learning | Based on MiniSat. This domain is experimental (and probably buggy!).
Octagon(B) | Difference logic constraints (X - Y <= a) | Incremental Floyd-Warshall algorithm | B is the bound type.

In addition, new abstract domains can be derived from existing ones using abstract transformers.

Abstract transformer | Constraint language | Propagation | Description
Direct_product(A1,...,An) | Union of A1,...,An | Component-wise propagation of each subdomain | In addition to the classical direct product of abstract interpretation, it has some facilities for transformers that share subdomains.
Logic_completion(A) | Quantifier-free logic formulas where the predicates are constraints in A | Based on the entailment of constraints in A | Equips an abstract domain A with logical formulas. Also called "natural domain SMT".
Propagator_completion(A) | Arithmetic constraints, functions (sin, cos, sqrt, ...) | HC4 algorithm | Equips an abstract domain A with propagators (functions implementing arithmetic constraints).
Delayed_product(A,B) | Union of A and B | Transfers an over-approximation of A-constraints into B, then an exact approximation whenever the constraint is sufficiently instantiated | Can be used to transfer non-linear constraints to a linear solver once the non-linear variables are instantiated.
Event_loop(A1,...,An) | None | Propagation loop of the propagators in A1,...,An | Meta abstract domain computing the closure of A1,...,An more efficiently than Direct_product. The closure of each Ai must be decomposable into propagators (i.e. sub-closure operators).

In addition, each domain is equipped with numerous splitting strategies that can impact the performance.

What to expect next?

A linear programming solver, such as VPL or the one in Apron, should be added. The next big step is to integrate ideas from abstract conflict driven learning (ACDL) to support conflict learning.

AbSolute ecosystem

We have several projects relevant to AbSolute that are worth checking out:

  • kobe: Constraint benchmarking tools suite. It is described in more depth in the benchmarking chapter.
  • minisatml: SAT solver based on MiniSAT; note that this project is a library, not directly an executable.
  • ptal.github.io: The repository hosting this book.
  • solvers-opam: The OPAM packages repository containing libraries and executables surrounding constraint solving.

Getting Started

The installation process should be easy, but do not hesitate to reach out for help if needed.

Requirements

The following is a list of the dependencies to build AbSolute; note that we explain below how to install OCaml if you do not have it already.

Installation

We install OCaml and AbSolute through the OCaml package manager opam. First, install opam and the necessary libraries with your package manager and initialize opam:

sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev
sudo apt-get install opam # on Debian, see opam documentation for other distributions.
opam init --comp 4.09.0+flambda # Initialize ~/.opam with a freshly compiled OCaml 4.09.0

The next step is to download and build AbSolute. If you intend to modify the source code and possibly contribute to the project, jump to the contributing chapter. Otherwise, you can install it from opam:

opam repo add solvers https://github.com/ptal/solvers-opam.git
opam install absolute

AbSolute is currently a library-only project, so it must be used from within an OCaml project, which we explain in the next chapter.
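For instance, a minimal dune setup for such a project could look like the following (a sketch assuming the library is exposed under the findlib name absolute; check the package documentation if the name differs):

```
(executable
 (name main)
 (libraries absolute))
```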

To update absolute (and any OPAM package) to the latest version, you can simply type:

opam update
opam install absolute

Learn AbSolute

We show step by step how to use AbSolute to solve a constraint problem. The code presented in this chapter is available online at absolute-demo. This project can be forked and used as a template for your own project.

This simple introduction cannot serve as a full introduction to constraint programming, and we refer to this online class to learn how to model a constraint problem.

In this tutorial, you will learn how to solve the famous n-queens problem in AbSolute. It encompasses the following steps:

  1. Create a model of the N-Queens using the structures of AbSolute.
  2. Build a constraint solver able to solve this problem by assembling abstract domains together.
  3. Type the formula created in (1) into the abstract domain.
  4. Run the solver and print the solution.

Model of the N-Queens problem

The N-Queens problem is a toy constraint problem where we try to place n queens on a chess board of size n*n such that no two queens can attack each other. A complete description is available on Wikipedia.

A simple model for this problem consists of n variables x1,...,xn, one for each queen. Since it is only possible to have one queen per line on the chess board, we can already decide that x1 is on the first line, x2 on the second, and so on. The following function creates a list of variable names:


(** [make_vars n] creates `n` variables named `x1`,...,`xn`. *)
let rec make_vars = function
  | 0 -> []
  | n -> ("x" ^ (string_of_int n))::(make_vars (n-1))
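As a side note, make_vars builds the names in descending order; the list is reversed later when assembling the full formula. A standalone snippet to check this ordering:

```ocaml
(* Standalone copy of [make_vars], to illustrate that the list is built
   in descending order (it is reversed before use). *)
let rec make_vars = function
  | 0 -> []
  | n -> ("x" ^ string_of_int n) :: make_vars (n - 1)

let () = assert (make_vars 3 = ["x3"; "x2"; "x1"])
```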

The value of the variable xi will be the column of xi on the chess board. Therefore, a queen variable can take a value between 1 and n. This corresponds to constraints of the form x1 >= 1 /\ x1 <= n, which translate into OCaml as:


(** [var_in_dom l u x] ensures that `x >= l` and `x <= u`. *)
let var_in_dom l u x =
  [Cmp(Var x, GEQ, Cst(Bound_rat.of_int l, Int));
   Cmp(Var x, LEQ, Cst(Bound_rat.of_int u, Int))]

(** [vars_in_dom l u vars] ensures that all variables in `vars` are in the interval [l..u]. *)
let rec vars_in_dom l u = function
  | [] -> []
  | x::vars -> (var_in_dom l u x)@(vars_in_dom l u vars)

Finally, we need to ensure that two queens cannot attack each other. For instance, to check that xi and xj do not attack each other (with i < j), the following constraints must hold:

  • xi != xj: xi and xj are not on the same column.
  • xi + i != xj + j: xi and xj are not on the same ascending diagonal.
  • xi - i != xj - j: xi and xj are not on the same descending diagonal.

This translates into OCaml as follows:


let noattack vars i j =
  let qi, qj = List.nth vars i, List.nth vars j in
  let ci = Cst(Bound_rat.of_int i, Int) in
  let cj = Cst(Bound_rat.of_int j, Int) in
  [Cmp(Var qi, NEQ, Var qj);
   Cmp(Binary(Var qi, ADD, ci), NEQ, Binary(Var qj, ADD, cj));
   Cmp(Binary(Var qi, SUB, ci), NEQ, Binary(Var qj, SUB, cj))]

let rec noattack_i vars i j =
  if j >= List.length vars then []
  else (noattack vars i j)@(noattack_i vars i (j+1))

let noattack_all vars =
  let rec aux i =
    if i >= List.length vars then []
    else (noattack_i vars i (i+1))@(aux (i+1)) in
  aux 0

By assembling all these constraints, we can design the full formula for the n-queens problem:


let make_nqueens n =
  let vars = List.rev (make_vars n) in
  let domains = List.rev (vars_in_dom 1 n vars) in
  let constraints = List.rev (noattack_all vars) in
  let rec make_formula = function
    | [] -> QFFormula (conjunction (domains@constraints))
    | x::vars -> Exists(x, Concrete Int, make_formula vars) in
  make_formula vars

It generates a conjunctive formula with the variables existentially quantified, for instance: ∃x1.∃x2...∃xn. x1 >= 1 /\ ... /\ x7 - 6 <> x8 - 7.

An abstract domain for the N-Queens problem

AbSolute is designed to be modular: you can craft your own constraint solver according to your problem. We design an abstract domain supporting integer arithmetic constraints:

module Box = Box.Make(Bound_int)(Itv.Itv)(Box_split.First_fail_LB)
module PC = Propagator_completion(Box.Vardom)(Box)
module E = Event_loop(Event_atom(PC))
module A = Direct_product(
  Prod_cons(Box)(
  Prod_cons(PC)(
  Prod_atom(E))))

let box_uid = 1
let pc_uid = 2
let event_uid = 3

let init () : A.t =
  let box = ref (Box.empty box_uid) in
  let pc = ref (PC.init PC.I.{uid=pc_uid; a=box}) in
  let event = ref (E.init event_uid pc) in
  A.init 0 (Owned box, (Owned pc, Owned event))

The Box abstract domain supports variables defined on interval domains (e.g. x >= 1 /\ x <= 10). The propagator completion PC equips Box with arbitrary arithmetic constraints (e.g. x * x + y - 2 > 0). The domain Event_loop is a meta abstract domain: it does not support any constraint itself, but it is useful to efficiently compute the fixpoint of the closure operator of PC. These three domains are then assembled together in a direct product A. A is only the type of the abstract domain, so we still need to create an element of A, which is realized by the function init. Each abstract element has a dedicated unique identifier (UID), which is useful in the next section.

Type the formula in the abstract domain

We now have the formula on one side, and the abstract domain on the other side. The remaining step is to assign each part of the formula to the UID of an abstract element. We call this process typing.


(** Given a formula, we assign each of its variables and constraints to an abstract domain.
   For the abstract domain `A`, all the variables and interval constraints go into `Box`, and the rest goes into `PC`. *)
let type_formula formula =
  let rec aux' = function
    (* Domain constraints go into `Box`. *)
    | Cmp (Var x, LEQ, Cst (i,ty)) -> box_uid, TCmp(Var x, LEQ, Cst (i,ty))
    | Cmp (Var x, GEQ, Cst (i,ty)) -> box_uid, TCmp(Var x, GEQ, Cst (i,ty))
    (* Other constraints go into `PC`. *)
    | Cmp c -> pc_uid, TCmp c
    | And (f1,f2) -> pc_uid, TAnd(aux' f1, aux' f2)
    (* All other constraints are not supported in this abstract domain. *)
    | _ -> raise (Wrong_modelling "Constraint not supported by the current abstract domain.") in
  let rec aux = function
    | Exists(name, ty, qf) -> TExists({name; ty; uid=box_uid}, aux qf)
    | QFFormula f -> TQFFormula (aux' f) in
  aux formula

The unary constraints over a variable are typed into box_uid, while the others are typed into pc_uid. The abstract domain A does not support every constraint (for instance disjunction), thus we raise an error when we encounter an unsupported constraint.

Finding a solution

Now that we have a typed formula, the remaining and most important step is to find a solution to this formula. A solution is an assignment of the variables such that all the constraints are satisfied. We first instantiate the solver with the created abstract domain.

module Solver = Fixpoint.Solver.Make(A)
module T = Solver.T

The next function is the main and last one: it assembles all the previous steps and calls the solver on the formula:


let _ =
  let nqueens = 8 in
  try
    (* (1) Merge the three previous steps. *)
    let qformula = make_nqueens nqueens in
    let a = init () in
    let tf = type_formula qformula in
    (* (1b) Print the typed formula (optional). *)
    print_tformula a tf;
    (* (2) Interpret the typed formula into the abstract element. *)
    let a, cs = A.interpret a Exact tf in
    (* (3) Add the interpreted constraints into the abstract element. *)
    let a = List.fold_left A.weak_incremental_closure a cs in
    (* (4) Create the solver. *)
    let bound_sol = T.BoundSolutions 1 in
    let transformer = T.init a [bound_sol] in
    (* (5) Run the solver. *)
    let (gs,_) =
      try Solver.solve transformer
      with Solver.T.StopSearch t -> t in
    if T.(gs.stats.sols) > 0 then
      print_solution nqueens gs
    else
      Printf.printf "No solution found."
  with e ->
    Printf.printf "Exception: %s\n" (Printexc.to_string e)


The project can be compiled and run as follows:

git clone https://github.com/ptal/absolute-demo.git
cd absolute-demo
make
./_build/install/default/bin/nqueens

You can change the variable nqueens to decrease or increase the size of the chess board. Try for instance nqueens = 3 and nqueens = 4, and verify the solutions on paper.

Many other features are available, and you can explore them in the developer documentation.

Benchmarking

In this chapter, we present kobe, a constraint benchmarking tools suite for various solvers including GeCode, Chuffed and AbSolute. The goals of this benchmarking tools suite are:

  1. Reproducibility of the research results.
  2. Report of the full results, whereas research papers often only summarize them.
  3. Automate benchmarking of different solvers on different problems.
  4. Analysis of the benchmarking results.

This work is ongoing and replicability problems might still be present. In case of problems, please do not hesitate to contact us on the issue tracker or by email.

Getting started

First, make sure you followed the instructions of the general getting started section. If so, you can install kobe through opam:

opam install kobe

You will have three new binaries in your path: kobegen, kobe and kobeview.

Sheet cheats

This section gives you a list of the valid values for some entries of the benchmarking configuration file (explained below).

  • Families of problems supported (directory in the JSON entry problem_sets.path): rcpsp, rcpsp-max.
  • Solvers kind (solvers_kind): see function run_bench in kobe.ml.
  • AbSolute domains (solvers_kind.AbSolute.domains): see function bench_absolute in kobe.ml.
  • Box splitting strategies (solvers_kind.AbSolute.domains.strategies): see function make_box_strategy in kobe.ml.
  • Octagon splitting strategies (solvers_kind.AbSolute.domains.strategies): see function make_octagon_strategy in kobe.ml.

Step by step tutorial

We go through the necessary steps to design and perform benchmarks. If you want to try these steps out by yourself, a full example project is provided. Here are the steps explained in this section:

  1. Gather data sets
  2. Create the benchmark configuration file
  3. Generate bench instances
  4. Perform the benchmark
  5. View and analyse the results

1. Gather data sets

Data sets must be contained in a directory with a specific structure, for example:

data
  rcpsp
    j30.sm
      optimum/optimum.csv
      j301_1.sm
      j301_2.sm
    patterson.rcp
      optimum/optimum.csv
      pat1.rcp
      pat2.rcp
  rcpsp-max
    sm_j10
      optimum/optimum.csv
      PSP1.SCH
      PSP2.SCH
      PSP3.SCH

In this data directory, we have two families of problems namely rcpsp/ and rcpsp-max/. A problem is a directory containing one or more data sets of the same underlying problem. A data set is a directory containing two things:

  1. A file optimum/optimum.csv or solution/solution.csv listing the name of each data file along with its optimum value (in case of an optimization problem) or its satisfiability (in case of a satisfaction problem).
  2. A list of data instance files, for example j301_1.sm, j301_2.sm,...

Once we have these data, we can proceed to the description of our benchmarks. Sample data are available in the directory data.
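To make this structure concrete, such a file roughly pairs each instance name with its known result; the exact columns are defined by kobe, so treat the following as a purely hypothetical illustration (the instance names are from the layout above, the values are made up):

```
pat1.rcp, 55
pat2.rcp, 47
```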

2. Create the benchmark configuration file

The benchmarks to perform are described in a single JSON configuration file. For the sake of example, we will benchmark with AbSolute because it is already available to us (for GeCode and Chuffed, see MiniZinc Benchmarking). Let's have a look at the configuration file example/scheduling-data.json:

{
  "bench_exec": "./exec.sh",
  "input_dir" : "data/",
  "output_dir" : "database/",
  "problem_sets" : [
    { "path": "rcpsp/patterson.rcp/",
      "timeout": 10 },
    { "path": "rcpsp-max/sm_j10/",
      "timeout": 10 }
  ],
  "solvers_kind": [
    <AbSolute: {
      "domains": [
        { "name": "Octagon", "strategies": ["MSLF_simple", "Max_min_LB", "Min_max_LB"] },
        { "name": "Box", "strategies": ["First_fail_LB", "MSLF_simple"] }
      ]
    }>],
  "csv" : {
    "fields" : [<ProblemName>, <Nodes>, <Solutions>, <Fails>, <Time : <Sec>>, <Optimum>]
  },
  "solvers_config": [
    { "name": "absolute",
      "version": "v0.4.0",
      "exec": "",
      "globals": "" }
  ]
}

Warning: be careful with trailing commas in lists, such as at the end of "MSLF_simple"] },: they generate errors when reading the file.

Here is a description of each entry:

  • bench_exec: see next section
  • input_dir: Directory containing all the data sets for each family of problem.
  • output_dir: Directory of the result database: where should the results be stored? This directory is created automatically if it does not yet exist.
  • problem_sets is the list of every data set, where:
    • path is a path of the form <family>/<data-sets> where family must be supported by kobe (see Sheet cheats). The data sets are contained in input_dir/path.
    • timeout is the maximal number of seconds for solving a data instance before it is stopped.
  • solvers_kind contains the configuration of each solver that we must bench.
    • AbSolute: Every AbSolute configuration requires two things: a domain and its associated search strategy (see Sheet cheats).
  • csv lists the entries that must appear in the resulting file; we explain these below.
  • solvers_config is the list of the required solvers along with some information about them.
    • name and version are used in the name of the produced files.
    • Note that kobe does not check if the version of AbSolute matches the currently installed OPAM version.

You can adapt each entry depending on your needs, please see the section Sheet cheats for supported values.

3. Generate bench instances

Once we have the benchmarking configuration file ready, we prepare the benchmarking session by generating one file per benchmark instance.

$ kobegen scheduling-benchmark.json
10 bench files generated.
./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-MSLF_simple.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-MSLF_simple.csv
./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-Max_min_LB.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-Max_min_LB.csv
./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-Min_max_LB.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-Min_max_LB.csv
./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Box-First_fail_LB.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Box-First_fail_LB.csv
./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Box-MSLF_simple.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Box-MSLF_simple.csv
./exec.sh database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-MSLF_simple.json database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-MSLF_simple.csv
./exec.sh database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-Max_min_LB.json database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-Max_min_LB.csv
./exec.sh database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-Min_max_LB.json database/rcpsp-max/sm_j10/absolute-v0.4.0/Octagon-Min_max_LB.csv
./exec.sh database/rcpsp-max/sm_j10/absolute-v0.4.0/Box-First_fail_LB.json database/rcpsp-max/sm_j10/absolute-v0.4.0/Box-First_fail_LB.csv
./exec.sh database/rcpsp-max/sm_j10/absolute-v0.4.0/Box-MSLF_simple.json database/rcpsp-max/sm_j10/absolute-v0.4.0/Box-MSLF_simple.csv

Basically, kobegen flattens the configuration file into many benchmark instance files, one per possible solver and data set configuration. These files are stored in the output_dir directory. For convenience, kobegen outputs the list of benchmarking commands to execute next, for example:

./exec.sh database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-MSLF_simple.json database/rcpsp/patterson.rcp/absolute-v0.4.0/Octagon-MSLF_simple.csv

./exec.sh is a user-defined script specified in the field bench_exec of the configuration file, and it takes two arguments:

  • .../Octagon-MSLF_simple.json, which is the flattened configuration file.
  • .../Octagon-MSLF_simple.csv, which is the name of the result file (this file is not yet created since the benchmarks have not yet been performed).

These two files live together in the database so we know exactly under which configuration a benchmark data set has been solved.

kobegen does not execute exec.sh; it only outputs the commands to execute. This is particularly useful for launching benchmarks in parallel on a server, where exec.sh can be used to launch a job.

Here, exec.sh contains the single command kobe $1 | tee $2, where kobe is the program explained next.
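Written out as a file, this exec.sh is just a small wrapper (a sketch; the shebang and quoting are ours):

```
#!/bin/sh
# $1: flattened benchmark configuration (.json), $2: result file (.csv).
# Run kobe on the configuration and duplicate its output into the CSV.
kobe "$1" | tee "$2"
```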

4. Perform the benchmark

Almost everything is settled; only the commands output by kobegen need to be run. The results are stored in a CSV format which looks as follows:

problem, nodes, solutions, fails, time, optimum
[Warning] subdirectory data/rcpsp-max/sm_j10/optimum ignored.
PSP1.SCH, 85, 4, 39, 0.10s, 26
PSP2.SCH, 1, 0, 1, 0.01s, unsat
PSP3.SCH, 5107, 6, 2546, timeout, 45

Note that the warning is only output on the terminal and not in the CSV result file.

The field csv specified in the configuration file is directly reflected in the information printed here. Here is some additional information on this output format:

  • The number of Nodes, Solutions and Fails.
  • The solving time in seconds, or timeout.
  • optimum contains either the latest optimum value found, none if no optimum value was found before the timeout, or unsat if the problem was proven unsatisfiable.

Note that a timeout combined with an optimum value means that the solver found that value but could not perform an optimality proof.

The benchmarking results can be shared by creating a pull request on the kobe-database repository. You can also explore the results that are already there!

5. View and analyse the results

In the simple example of this tutorial, we already have 10 different files which can be cumbersome to examine by hand. We provide kobeview to view and analyse the results stored in the database directory.

kobeview database > view_database.json

Then, you can go to the kobe analysis page or open the local src/kobe-view/viewer/kobe.html page, and load the file view_database.json into the website. Note that all analyses are done locally in JavaScript, so this website can be used offline. You can click on "Time step" to obtain several graphs on our benchmarks ("instance inclusion" and "cactus plot" only work when there are several solvers; this should be fixed).

A limited number of analyses are currently available, but you can participate by contributing to kobeview, or create your own program, since the database contains only CSV and JSON files.

Supporting a new dataset

In case kobe does not support your problem, new contributions are welcome! We describe here the basic steps for adding a new format and problem specification to kobe, which involves modifying its code. To illustrate the necessary steps, we will pretend we are adding benchmarking support for the jobshop problem.

Formatting the data set

Format the data instances of your problem (see also the "Gather data sets" section above). We have one GitHub repository per problem family (e.g. scheduling or sat). The data in these data sets are usually classified according to the origin of the instances (e.g. "Taillard instances") and the problem kind (e.g. RCPSP or RCPSP/max). Each data directory contains the set of instances, and an additional directory named either solution for satisfiability problems or optimum for optimization problems.

Parsing the data

The first step is to parse the data instances into an OCaml structure specific to your problem. To achieve that, in kobe/parsers, select the sub-directory corresponding to the category of your problem (or create one), and create jobshop_jss.ml and jobshop_data.ml, where the former is the parser and the latter the parsed data structure. Formats usually follow a similar pattern, so you can look at existing files to get started. Don't forget to also create the corresponding .mli, and to add some comments in the code and in this manual (e.g. Scheduling data sets). Please cite your sources (papers and origin of the data).

In the file dispatch.ml add a variant to problem (e.g. JOBSHOP of jobshop), and redirect the file to the right parser in the function dispatch according to the file extension.

Create the model

Once we have parsed our data into a suitable structure, it is time to create the constraint model. We create a file models/jobshop.ml with a single function formula_of_jobshop: jobshop -> bab_qformula. bab_qformula is a representation of a formula where a value must be minimized or maximized; it works for satisfiability as well. You can look at existing models for examples. Similarly to dispatch.ml in the previous section, you must register your new function in models.ml.

If you want to support a MiniZinc model, you need to convert the data parsed in the previous section into a DZN file that will be fed to your MiniZinc model. To achieve that, create a file generators/jobshop2dzn.ml (replace jobshop with the name of your problem), and register this function in mzn.ml.

Running the new model

That should be it! You can now try to run your new model with various solvers (GeCode, Chuffed, AbSolute). Check whether the obtained results are correct with our analyzer tool kobeview.

MiniZinc Benchmarking

Scheduling Data Sets

kobe parses different file formats describing scheduling problems, such as RCPSP and its variants. We give a list of the supported formats and data sets available in kobe-rcpsp, and credit the owners of these data sets. We have cleaned these data sets of all the meta-information that was unnecessary to us, but feel free to consult the references given for the full data sets.

We formatted known optimum results in a CSV format that is common to every data set (e.g. data/rcpsp/j30.sm/optimum/optimum.csv).

RCPSP

Patterson (.rcp)

Format originally used to evaluate RCPSP in Patterson (1984). It is the format used by the RanGen1 and RanGen2 generators.

Data sets:

  1. Patterson: 110 instances.
  2. RG300[1]: 480 instances, 300 activities, 4 resources.
  3. RG30[2]: 900, 180, 240, 240, 240 instances, 30 activities, 4 resources. The resource data are a bit "restricted" (in the way they were generated) on the RG30 sets.
  4. MT[3]: 900, 800, 1200, 1200 instances, 30 activities, 0 resources. ResSet[4]: can be merged with MT for resource specifications.

[1] Debels, Dieter and Vanhoucke, Mario. "A Decomposition-Based Genetic Algorithm for the Resource-Constrained Project-Scheduling Problem", 2007.

[2] Mario Vanhoucke, José Coelho, Dieter Debels, Broos Maenhout, Luís V. Tavares, "An evaluation of the adequacy of project network generators with systematically sampled networks", European Journal of Operational Research, Volume 187, Issue 2, 2008.

[3] Vanhoucke, M., 2010, "Measuring time: Improving project performance using earned value management" 1st ed., Springer: 164 pages.

[4] Testing and validating algorithms for resource-constrained project scheduling (in submission).

SM format (.sm)

Format of the PSPLIB, a well-known library for RCPSP with sets of instances of 30, 60, 90 and 120 activities.

Data sets:

  1. j30: 480 instances, 30 activities and 4 resources.
  2. j60: 480 instances, 60 activities and 4 resources.
  3. j90: 480 instances, 90 activities and 4 resources.
  4. j120: 600 instances, 120 activities and 4 resources.

R. Kolisch and A. Sprecher, “PSPLIB-a project scheduling problem library: OR software-ORSEP operations research software exchange program,” European journal of operational research, vol. 96, no. 1, pp. 205–216, 1997.

http://www.om-db.wi.tum.de/psplib/getdata.cgi?mode=sm

RCPSP/max

ProGenMax format (.SCH):

Format of the sets of instances produced by the ProGenMax instance generator. It concerns the RCPSP/max problem.

Data sets:

  1. C, D: each 540 instances, 100 activities, 5 resources.
  2. UBO_N: each 90 instances, N activities, 5 resources. N in 10, 20, 50, 100, 200, 500 and 1000.
  3. J_N: 270 instances, N activities, 5 resources. N in 10, 20, 30.

http://www.wiwi.tu-clausthal.de/de/abteilungen/produktion/forschung/schwerpunkte/project-generator/rcpspmax/

MMRCPSP (multi-mode RCPSP)

SM format (.sm)

Similar to RCPSP, but with nonrenewable resources.

Data sets:

  1. J_N: 3840 instances in total, N activities, N in 10, 12, 14, 16, 18, 20, 30

SM format light (.sm)

Similar to SM format but without some headers.

Data sets:

  1. MMLIB50: 540 instances, 50 activities, 3 modes.
  2. MMLIB100: 540 instances, 100 activities, 3 modes.
  3. MMLIB+: 3240 instances, 50 or 100 activities, 3, 6 or 9 modes.

[1] V. Van Peteghem and M. Vanhoucke, “An experimental investigation of metaheuristics for the multi-mode resource-constrained project scheduling problem on new dataset instances,” European Journal of Operational Research, vol. 235, no. 1, pp. 62–72, May 2014.

Job shop scheduling problem

The data sets and optimum are extracted and formatted from the following sources:

  1. optimizizer.com
  2. Yasumasa Tamura's Github

Data sets:

  1. ABZ: 5 instances, due to Adams et al. [1].
  2. FT: 3 instances, due to Fisher and Thompson [2].
  3. LA: 40 instances due to Lawrence [3].
  4. ORB: 10 instances due to Applegate and Cook [4].
  5. SWV: 20 instances due to Storer et al. [5].
  6. YN: 4 instances due to Yamada and Nakano [6].
  7. TA: 80 instances due to Taillard [7].
  8. Demirkol: 80 instances due to Demirkol et al. [8].
  9. CAR: 8 instances due to Carlier and Pinson [9].

All data sets are formatted according to the following format:

# Comments start with '#'
<jobs> <machines>
Each line has N operations, where an operation is a pair <machine_idx> <duration>.
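To make the format concrete, here is a minimal OCaml sketch of a parser for it. This is not kobe's actual jobshop_jss.ml; the type and function names are made up for the example:

```ocaml
(* Hypothetical sketch of a parser for the jobshop format above:
   '#' lines are comments, the first data line is "<jobs> <machines>",
   and every following line is a sequence of <machine_idx> <duration> pairs. *)
type operation = { machine : int; duration : int }
type jobshop = { jobs : int; machines : int; ops : operation list list }

let parse_jobshop (lines : string list) : jobshop =
  let is_data l =
    let l = String.trim l in
    l <> "" && l.[0] <> '#' in
  let ints_of_line l =
    String.split_on_char ' ' (String.trim l)
    |> List.filter (fun s -> s <> "")
    |> List.map int_of_string in
  match List.filter is_data lines with
  | [] -> failwith "empty instance"
  | header :: rows ->
    let jobs, machines =
      match ints_of_line header with
      | j :: m :: _ -> j, m
      | _ -> failwith "malformed header" in
    let rec pairs = function
      | [] -> []
      | m :: d :: rest -> { machine = m; duration = d } :: pairs rest
      | _ -> failwith "odd number of fields" in
    { jobs; machines; ops = List.map (fun r -> pairs (ints_of_line r)) rows }

(* A two-job, two-machine toy instance. *)
let () =
  let js = parse_jobshop ["# toy"; "2 2"; "0 3 1 2"; "1 4 0 1"] in
  assert (js.jobs = 2 && js.machines = 2);
  assert (List.length js.ops = 2)
```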

[1] J. Adams, E. Balas, D. Zawack. "The shifting bottleneck procedure for job shop scheduling.", Management Science, Vol. 34, Issue 3, pp. 391-401, 1988.

[2] J.F. Muth, G.L. Thompson. "Industrial scheduling.", Englewood Cliffs, NJ, Prentice-Hall, 1963.

[3] S. Lawrence. "Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (Supplement).", Graduate School of Industrial Administration. Pittsburgh, Pennsylvania, Carnegie-Mellon University, 1984.

[4] D. Applegate, W. Cook. "A computational study of job-shop scheduling.", ORSA Journal on Computing, Vol. 3, Issue 2, pp. 149-156, 1991.

[5] R.H. Storer, S.D. Wu, R. Vaccari. "New search spaces for sequencing problems with applications to job-shop scheduling.", Management Science Vol. 38, Issue 10, pp. 1495-1509, 1992.

[6] T. Yamada, R. Nakano. "A genetic algorithm applicable to large-scale job-shop problems.", Proceedings of the Second international workshop on parallel problem solving from Nature (PPSN'2). Brussels (Belgium), pp. 281-290, 1992.

[7] E. Taillard. "Benchmarks for basic scheduling problems", European Journal of Operational Research, Vol. 64, Issue 2, pp. 278-285, 1993.

[8] Ebru Demirkol, Sanjay Mehta, Reha Uzsoy. "Benchmarks for shop scheduling problems", European Journal of Operational Research, 109(1), 1998, pp. 137-141.

[9] J. Carlier, E. Pinson. "An algorithm for solving the job-shop problem.", Management Science, Vol. 35, Issue 2, pp. 164-176, 1989.

Flexible job shop scheduling problem

The information about the data sets has been taken from [5], and the optimum bounds from [5,6].

Data sets:

  1. Barnes: 21 instances, due to Chambers and Barnes [1].
  2. Brandimarte: 10 instances, due to Brandimarte [2].
  3. Dauzere: 18 instances, due to Dauzère-Pérès and Paulli [3].
  4. Hurink: 3 data sets of 66 instances each, obtained by modifying the ABZ, FT, LA and ORB data sets of the job shop scheduling problem, due to Hurink et al. [4].
    1. edata: Few operations may be assigned to more than one machine.
    2. rdata: Most of the operations may be assigned to some machines.
    3. vdata: All operations may be assigned to several machines.

All data sets have the following format (taken from the file DataSetExplanation.txt coming with the benchmarks):

  1. The first line contains (at least) two numbers: the first is the number of jobs and the second the number of machines (a third number, if present, is the average number of machines per operation and is not necessary).
  2. Every subsequent row represents one job: the first number is the number of operations of that job; the second number (say k >= 1) is the number of machines that can process the first operation; then follow k pairs (machine, processing time) specifying the eligible machines and their processing times; then come the data for the second operation, and so on.
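To make this row layout concrete, here is a sketch of a reader for a single job row. The name parse_fjsp_job and the nested-list output are illustrative assumptions, not part of kobe.

```python
# Parse one job row of the flexible job shop format: the first number is the
# number of operations; each operation starts with the number k of eligible
# machines, followed by k pairs (machine, processing time).
def parse_fjsp_job(row):
    nums = [int(x) for x in row.split()]
    n_ops, i, ops = nums[0], 1, []
    for _ in range(n_ops):
        k = nums[i]          # number of machines for this operation
        i += 1
        alts = [(nums[i + 2 * p], nums[i + 2 * p + 1]) for p in range(k)]
        i += 2 * k
        ops.append(alts)     # alternatives (machine, processing time)
    return ops
```

For example, the row 2 2 1 5 2 7 1 3 4 describes a job with two operations: the first may run on machine 1 in 5 time units or on machine 2 in 7, while the second may only run on machine 3 in 4.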

[1] J. B. Chambers and J. W. Barnes. Flexible Job Shop Scheduling by Tabu Search. The University of Texas, Austin, TX, Technical Report Series ORP96-09, Graduate Program in Operations Research and Industrial Engineering, 1996.

[2] P. Brandimarte. Routing and Scheduling in a Flexible Job Shop by Tabu Search. Annals of Operations Research, 41(3):157–183, 1993.

[3] S. Dauzère-Pérès and J. Paulli. Solving the General Multiprocessor Job-Shop Scheduling Problem. Technical report, Rotterdam School of Management, Erasmus Universiteit Rotterdam, 1994.

[4] J. Hurink, B. Jurisch, and M. Thole, “Tabu search for the job-shop scheduling problem with multi-purpose machines,” Operations-Research-Spektrum, vol. 15, no. 4, pp. 205–215, 1994.

[5] Behnke, D., & Geiger, M. J. (2012). Test instances for the flexible job shop scheduling problem with work centers. Arbeitspapier/Research Paper/Helmut-Schmidt-Universität, Lehrstuhl für Betriebswirtschaftslehre, insbes. Logistik-Management.

[6] A. Schutt, T. Feydy, and P. J. Stuckey, “Scheduling Optional Tasks with Explanation,” in Principles and Practice of Constraint Programming, Berlin, Heidelberg, 2013, vol. 8124, pp. 628–644.

SAT Data Sets

Contributing to AbSolute

This guide is for anybody who wants to contribute to AbSolute. First of all, we encourage you to discuss your project with us to make sure your contributions can eventually be integrated. Make sure you have installed OCaml and Opam.

Installing and building AbSolute

Install dependencies, clone and build AbSolute from the Github repository:

opam install dune extlib containers mlgmpidl mtime alcotest
git clone https://github.com/ptal/AbSolute
cd AbSolute
make build
make test

You might also be interested in cloning the following repositories, which are all relevant to the AbSolute project:

git clone https://github.com/ptal/kobe
git clone https://github.com/ptal/kobe-rcpsp
git clone https://github.com/ptal/kobe-sat
git clone https://github.com/ptal/kobe-database
git clone https://github.com/ptal/minisatml
git clone https://github.com/ptal/solvers-opam
git clone https://github.com/ptal/ptal.github.io

Development flow

You should first create a branch or a fork, add and test the new feature, and then create a pull request to merge it into the mainstream repository. In more detail:

  1. Make your changes in a branch.
  2. Document the .mli files of your code.
  3. Verify that it passes all the tests, and that Travis succeeds in building and testing your branch.
  4. Merge your changes into master.
  5. Publish the new version (see below).

If you modify interfaces that are shared by several abstract domains (such as abstract_domain.mli), you should create an issue on Github to discuss it with the team, since it impacts virtually the whole project. It is important to keep the number of functions in these signatures minimal. If you notice a function is not used anymore, do not hesitate to remove it; but please contact us first to validate your hypothesis, so you do not lose time removing a function that is actually needed.

Regarding the coding guidelines, you should try to code in the same style as the rest of the project; please observe how we write code and do the same. One very important rule is to indent using 2 spaces and not tabs, and to create a .ml file along with its documented .mli companion for every new module.

In any case, do not hesitate to communicate and ask questions; stupid questions do not exist :-). Questions and discussions are very important so you do not lose time during development or go in the wrong direction.

Publishing

We maintain version numbers according to the semantic versioning rules. We provide a script publish.sh that automatically updates the version of your OPAM project, creates the tag on git, adds the new version to the solvers-opam repository, and pushes all the changes online. It can be used as follows:

./publish.sh major ../AbSolute ../solvers-opam

It releases a new major version of the project AbSolute rooted at ../AbSolute, where the OPAM packages repository is rooted at ../solvers-opam. The new version can then be installed with opam update followed by opam install absolute.

For more usage information, you can type:

./publish.sh

This script is only compatible with Linux (not yet tested on macOS), so if you are on Windows, you should read the script (which is documented) to learn the necessary steps.

Local version of AbSolute in OPAM

It is often convenient to have a local version of AbSolute in OPAM, for instance when developing kobe against AbSolute, so we can detect bugs before committing and publishing a new version. We first pin a local version of AbSolute:

# Remove the currently installed version of AbSolute (not sure if it is necessary though).
opam remove absolute
# Pin a local version of AbSolute where `AbSolute` is the name of the git repository.
opam pin add absolute AbSolute

The next step is to edit the OPAM configuration file of AbSolute so that it is no longer synchronized with git (since we want to work solely on the local copy). Run opam pin edit absolute in your terminal, and change the following lines:

url {
  src:
    "git+file:///home/ptalbot/repositories/absolute-project/AbSolute#master"
}

by removing the git+ part of the URL:

url {
  src:
    "file:///home/ptalbot/repositories/absolute-project/AbSolute#master"
}

Each time you make some changes and want to use the newest local version of AbSolute, just type:

opam upgrade .

in the repository of AbSolute.

Research

AbSolute has generated several research papers. Currently, the most general one to cite is:

@inproceedings{DBLP:conf/vmcai/PelleauMTB13,
  author    = {Marie Pelleau and
               Antoine Min{\'{e}} and
               Charlotte Truchet and
               Fr{\'{e}}d{\'{e}}ric Benhamou},
  title     = {A Constraint Solver Based on Abstract Domains},
  booktitle = {Verification, Model Checking, and Abstract Interpretation, 14th International
               Conference, {VMCAI} 2013, Rome, Italy, January 20-22, 2013. Proceedings},
  pages     = {434--454},
  year      = {2013},
  crossref  = {DBLP:conf/vmcai/2013},
  url       = {https://doi.org/10.1007/978-3-642-35873-9\_26},
  doi       = {10.1007/978-3-642-35873-9\_26},
  timestamp = {Wed, 24 May 2017 08:30:31 +0200},
  biburl    = {https://dblp.org/rec/bib/conf/vmcai/PelleauMTB13},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Changelog

We describe the main changes made in each version of AbSolute. This is especially useful when analyzing the results in kobe-database.

Version 0.10.0:

  • Add a function has_changed in Abstract_domain to detect the fixed point of the closure operator more precisely.
  • Fix bugs in CascadeProduct.

Version 0.9.2: Fix bugs in replace_uid and instantiate_vars where constraints were wrongly rewritten.

Version 0.9.1: Rewrite the solve function in CPS to avoid possible stack overflow.

Version 0.9.0:

  • Add the domain product Delayed_product which transfers a constraint to a more specialized domain when instantiated enough.
  • Propagator_completion now automatically exchanges equalities over its sub-domains.
  • Add a minimalist search strategy language.

Version 0.8.0:

  • Add an embed function, which is the inverse of project in an abstract domain; it increases the efficiency of the HC4 algorithm.
  • Fix the min/max filtering in interval.
  • New converter between bounds to avoid always going through rationals when converting a bound.

Version 0.7.0:

  • Add the propagator completion domain transformer, which equips any projection-based abstract domain with propagators.
  • The box domain no longer stores its constraints.
  • Fix a bug when selecting the new bounds in the branch-and-bound algorithm.

Version 0.6.0:

  • New architecture of products based on abstract domain sharing.
  • Typing of the logic formula according to the type of an abstract domain.
  • Note: this version was buggy due to a problem in the BAB algorithm (fixed in 0.7.0).