Science of Computer Programming

ISSN: 0167-6423

Datasets associated with articles published in Science of Computer Programming

3 results
  • Data for: Universal (Meta-)Logical Reasoning: Recent Successes
    The authors' universal (meta-)logical reasoning approach is demonstrated and discussed with a challenge puzzle in epistemic reasoning: the wise men puzzle. The presented solution places particular emphasis on the adequate modeling of common knowledge.
    • Dataset
  • Alloy4Fun Dataset for 2022/23
    This dataset contains models submitted by students in the Alloy4Fun platform to solve the challenge models from various editions of formal methods courses at the University of Minho (UM) and the University of Porto (UP) between the fall of 2019 and the spring of 2023, totalling about 100,000 entries. Participants include those enrolled in the optional MSc course "Specification and Modelling" (EM) and the mandatory MSc course "Formal Methods in Software Engineering" (MFES) at UM, and the optional MSc course "Formal Methods for Critical Systems" (MFS) at UP. Note that, since the challenges' permalinks are publicly available, the dataset may contain submissions from other participants outside the classroom context. The analysis of the 2021 dataset is reported in the Science of Computer Programming paper "Experiences on Teaching Alloy with an Automated Assessment Platform" (extending the ABZ'20 conference version analysing the 2020 dataset).
    The challenges, their permalinks, the courses in which they were used (with approximate student numbers), and the number of entries are:
      • Trash FOL (sDLK7uBCbgZon3znd): EM 19/20 (~20) and 20/21 (~20), MFS 21/22 (~10) and 22/23 (~10); 4092 entries
      • Classroom FOL (YH3ANm7Y5Qe5dSYem): EM 19/20 (~20) and 20/21 (~20), MFS 21/22 (~10) and 22/23 (~10); 5893 entries
      • Trash RL (PQAJE67kz8w5NWJuM): EM 19/20 (~20) and 20/21 (~20); 4361 entries
      • Classroom RL (zRAn69AocpkmxXZnW): EM 19/20 (~20) and 20/21 (~20); 6341 entries
      • Graphs (gAeD3MTGCCv8YNTaK): EM 19/20 (~20) and 20/21 (~20); 3211 entries
      • LTS (zoEADeCW2b2suJB2k): EM 19/20 (~20) and 20/21 (~20); 3382 entries
      • Production line (jyS8Bmceejj9pLbTW): EM 19/20 (~20) and 20/21 (~20); 898 entries
      • Production line v2 (bNCCf9FMRZoxqobfX): MFES 21/22 (~200), MFS 21/22 (~10) and 22/23 (~10); 4903 entries
      • Production line v3 (aTwuoJgesSd8hXXEP): MFES 22/23 (~200); 3175 entries
      • CV (JC8Tij8o8GZb99gEJ): EM 19/20 (~20); 1199 entries
      • CV v2 (WGdhwKZnCu7aKhXq9): EM 20/21 (~20); 393 entries
      • Trash LTL (9jPK8KBWzjFmBx4Hb): EM 19/20 (~20) and 20/21 (~20); 5279 entries
      • Train Station (FwCGymHmbqcziisH5): EM 20/21 (~20); 1264 entries
      • Train Station v2 (QxGnrFQnXPGh2Lh8C): MFES 21/22 (~200) and 22/23 (~200), MFS 21/22 (~10) and 22/23 (~10); 8158 entries
      • Courses (PSqwzYAfW9dFAa9im): MFES 21/22 (~200), MFS 21/22 (~10) and 22/23 (~10); 14884 entries
      • Courses v2 (JDKw8yJZF5fiP3jv3): MFES 22/23 (~200); 7632 entries
      • Social network (dkZH6HJNQNLLDX6Aj): MFES 21/22 (~200) and 22/23 (~200), MFS 21/22 (~10) and 22/23 (~10); 22690 entries
    Each entry of the dataset registers either an execution (which may have returned a result or an error) or the creation of a permalink for sharing, and contains:
      • _id: the id of the interaction
      • time: the timestamp of its creation
      • derivationOf: the parent entry
      • original: the first ancestor with secrets (always the same within an exercise)
      • code: the complete code of the model, excluding the secrets defined in the original entry and with student comments removed
      • sat: whether the command was satisfiable (counter-example found for checks), or -1 when an error was thrown [only for executions]
      • cmd_i: the index of the executed command [only for executions]
      • cmd_n: the name of the executed command [only for successful executions, i.e. no error thrown]
      • cmd_c: whether the command was a check [only for successful executions, i.e. no error thrown]
      • msg: the error or warning message [only for successful executions with warnings or when an error was thrown]
      • theme: the visualisation theme [only for sharing entries]
    User comments were removed from the code to guarantee anonymization. A small sketch of loading and inspecting entries with this schema appears after this list.
    • Dataset
  • Cost-effective Simulation-based Test Selection in Self-driving Cars Software
    Simulation environments are essential for the continuous development of complex cyber-physical systems such as self-driving cars (SDCs). Previous results on simulation-based testing for SDCs have shown that many automatically generated tests do not strongly contribute to the identification of SDC faults and hence do not help increase the quality of SDCs. Because running such “uninformative” tests generally wastes computational resources and drastically increases the testing cost of SDCs, testers should avoid them. However, identifying “uninformative” tests before running them remains an open challenge. Hence, this paper proposes SDC-Scissor, a framework that leverages Machine Learning (ML) to identify SDC tests that are unlikely to detect faults in the SDC software under test, thus enabling testers to skip their execution and drastically increase the cost-effectiveness of simulation-based testing of SDC software. Our evaluation of six ML models on two large datasets comprising 22,652 tests showed that SDC-Scissor achieved a classification F1-score of up to 96%. Moreover, our results show that SDC-Scissor outperformed a randomized baseline in identifying more failing tests per time unit. A sketch of this ML-based test-selection idea also appears after this list.
    • Software/Code
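
The Alloy4Fun entry schema described above lends itself to simple scripted analysis. The following is a minimal sketch, assuming the entries are distributed as a JSON array with the field names listed above; the file name alloy4fun_2022_23.json, the numeric 1/0/-1 encoding of sat, and the assumption that sharing entries can be recognised by the presence of theme rather than sat are illustrative choices, not details confirmed by the dataset description.

    # Minimal sketch: load Alloy4Fun entries and summarise executions.
    # Assumptions: entries stored as a JSON array in "alloy4fun_2022_23.json"
    # (hypothetical file name) with the fields described above.
    import json
    from collections import Counter

    with open("alloy4fun_2022_23.json", encoding="utf-8") as f:
        entries = json.load(f)

    # Index by _id so derivation chains can be walked via derivationOf.
    by_id = {e["_id"]: e for e in entries}

    # Executions carry a "sat" field; sharing entries carry a "theme" instead (assumption).
    executions = [e for e in entries if "sat" in e]
    errors = [e for e in executions if e["sat"] == -1]

    # For successful check commands, a satisfiable result means a counter-example was found.
    check_results = Counter(
        "counter-example" if e["sat"] else "no counter-example"
        for e in executions
        if e.get("cmd_c") and e["sat"] != -1
    )

    print(len(entries), "entries |", len(executions), "executions |", len(errors), "with errors")
    print(check_results)

    def derivation_chain(entry_id):
        """Walk derivationOf links from an entry back to its original exercise."""
        chain, cur = [], by_id.get(entry_id)
        while cur is not None:
            chain.append(cur["_id"])
            parent = cur.get("derivationOf")
            cur = by_id.get(parent) if parent and parent != cur["_id"] else None
        return chain

The derivation_chain helper (hypothetical, not part of the dataset) shows how the derivationOf and original fields can be used to reconstruct how a student's submission evolved from the shared exercise.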
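
For the SDC-Scissor entry, the core idea is a binary classifier that predicts, from pre-execution test features, whether a simulation-based test is likely to expose a fault, so that tests predicted as unlikely to fail can be skipped. The sketch below illustrates that idea only in general terms; the file tests.csv, its column names, and the choice of a random forest are assumptions for illustration and do not reflect SDC-Scissor's actual implementation or feature set.

    # Minimal sketch of ML-based test selection in the spirit of SDC-Scissor.
    # Assumptions: a CSV "tests.csv" with one row per generated test, numeric
    # feature columns, and a binary "failed" label (all hypothetical names).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("tests.csv")
    X = data.drop(columns=["failed"])
    y = data["failed"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )

    # Train a classifier to predict whether a test will expose a fault.
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    pred = clf.predict(X_test)
    print("F1-score:", f1_score(y_test, pred))

    # Run only the tests predicted as likely to fail; the rest are skipped
    # to save simulation time.
    selected = X_test[pred == 1]
    print(f"Selected {len(selected)} of {len(X_test)} tests for simulation")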