Server Scheduling Benchmark Instances
Description of this data
Data for job scheduling on a server.
The data is divided into 5 ZIP files. Each ZIP file contains a collection of text files; each text file holds the information for all jobs arriving at the server on a given day.
The text file structure is as follows:
<p: list of CPU cycles required>
<w: list of priority weights>
<r: list of release dates>
<pr: list of precedence constraints, as pairs of the form [parent, child]>
The entries p, w, and r follow the format of Python dictionaries (job ID: information), whereas pr has the format of a Python list.
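Since the four lines are Python literals, one way to load a daily instance is with `ast.literal_eval`. This is a minimal sketch assuming exactly the four-line layout described above; the function name and file path are hypothetical.

```python
import ast

def read_instance(path):
    """Parse one daily instance file into (p, w, r, pr).

    Assumes four non-empty lines: p, w, r as Python dict literals
    keyed by job ID, and pr as a Python list of [parent, child] pairs.
    """
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    p = ast.literal_eval(lines[0])   # job ID -> CPU cycles required
    w = ast.literal_eval(lines[1])   # job ID -> priority weight
    r = ast.literal_eval(lines[2])   # job ID -> release date
    pr = ast.literal_eval(lines[3])  # list of [parent, child] pairs
    return p, w, r, pr
```

`ast.literal_eval` only accepts Python literals, so it is safer than `eval` for files of this kind.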
The "results.xls" file has 10 Excel sheets containing the best known lower bound and upper bound (i.e. schedule value) for each instance. There are two sheets for each set of instances: one with the results considering release dates (name ends in "wr") and one with the results without release dates (name ends in "nr"). In all cases, the schedule was evaluated as total completion time plus total energy consumption, as described in "Resource Cost Aware Scheduling" by Carrasco, Iyengar, and Stein (https://doi.org/10.1016/j.ejor.2018.02.059).
Experiment data files
Best known lower bound (LB) and upper bound (UB, i.e. the value of an actual schedule) for each instance. The value of a schedule is the total completion time plus the total energy consumption, where the energy consumption of job i run at speed s_i is (p_i/s_i) * s_i^3.
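The objective above can be sketched in a few lines for a sequential single-machine schedule. This is an illustration only: it assumes jobs run back-to-back in a given order at chosen speeds, and it ignores release dates and precedence feasibility; the function name is hypothetical. Note that (p_i/s_i) * s_i^3 simplifies to p_i * s_i^2.

```python
def schedule_value(order, p, speeds):
    """Total completion time plus energy for a sequential schedule.

    order: job IDs in processing order
    p: dict of job ID -> CPU cycles required
    speeds: dict of job ID -> chosen speed s_i
    Energy for job i is (p_i / s_i) * s_i**3 = p_i * s_i**2.
    """
    t = 0.0
    total_completion = 0.0
    energy = 0.0
    for j in order:
        t += p[j] / speeds[j]           # processing time at speed s_j
        total_completion += t           # job j completes at time t
        energy += p[j] * speeds[j] ** 2
    return total_completion + energy
```

Higher speeds shorten completion times but raise energy cubically with speed, which is the trade-off the benchmark's objective captures.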
188 instances - between 486 and 84654 jobs.
112 instances - between 36 and 109 jobs.
125 instances - between 4 and 13 jobs.
109 instances - between 14 and 35 jobs.
123 instances - between 110 and 485 jobs.
This data is associated with the following publication: Carrasco, R. A., Iyengar, G., Stein, C., "Resource Cost Aware Scheduling", European Journal of Operational Research (https://doi.org/10.1016/j.ejor.2018.02.059).
Cite this dataset
Carrasco, Rodrigo A. (2018), “Server Scheduling Benchmark Instances”, Mendeley Data, v1 http://dx.doi.org/10.17632/ph95d337dj.1
The files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International licence.