Data and Figures for “Regime-Aware Adaptive Security Policy Management for Smart Environments under Attacker Learning”
Description
This dataset supports the manuscript “Regime-Aware Adaptive Security Policy Management for Smart Environments under Attacker Learning.” The package contains aggregate simulation outputs, experiment configuration files, regime catalogues, transfer and drift catalogues, a run manifest, representative episode-level outputs, and the figure files that underpin the main manuscript and supplementary material.

The study evaluates four defender policy families (commitment, Nash, reactive, and static) over a static foundation layer and under three attacker-learning conditions: stationary within-regime learning, cross-regime transfer, and drifting-regime adaptation. The attacker is modelled as a discrete, partially observable, recency-weighted softmax Q-learning agent.

The evaluation reports security and operational outcomes, including effective risk, defender utility, attack intensity, control intensity, latency burden, energy burden, switching burden, policy stability, attacker reward, attacker abstention rate, attacker entropy, convergence episode, transfer degradation, and drift recovery time.

The dataset is intended to support verification of the reported empirical results, inspection of the figure evidence, and reuse of the aggregate simulation outputs. The CSV files can be opened with Python, R, Excel, LibreOffice Calc, or other statistical software. Figures are provided in PNG and, where available, PDF format. The current package contains data, configuration files, catalogues, representative outputs, and figures; it does not include a full executable simulation codebase.
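To make the attacker model concrete, the sketch below shows a generic discrete softmax Q-learner in which a constant learning rate acts as an exponential recency weight, so recent rewards dominate older ones. This is an illustrative sketch only: the class name, state/action spaces, and all parameter values (`alpha`, `gamma`, `temperature`) are assumptions for exposition, not the manuscript's actual agent or parameterisation.

```python
import math
import random

def softmax_probs(q_values, temperature):
    # Numerically stable softmax over a list of Q-values.
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

class RecencyWeightedQAttacker:
    """Illustrative discrete softmax Q-learner with recency weighting.

    A constant learning rate `alpha` makes the value estimate an
    exponentially recency-weighted average of past targets. All
    parameters here are illustrative assumptions, not the values
    used in the manuscript.
    """

    def __init__(self, n_states, n_actions,
                 alpha=0.2, gamma=0.9, temperature=0.5, seed=0):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha = alpha
        self.gamma = gamma
        self.temperature = temperature
        self.rng = random.Random(seed)

    def act(self, state):
        # Sample an action from the softmax distribution over Q-values.
        probs = softmax_probs(self.q[state], self.temperature)
        r = self.rng.random()
        cum = 0.0
        for action, p in enumerate(probs):
            cum += p
            if r < cum:
                return action
        return len(probs) - 1

    def update(self, state, action, reward, next_state):
        # Constant-alpha Q-update: an exponential recency weighting
        # of the bootstrapped targets.
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

Partial observability in the actual study would mean `state` is the attacker's observation rather than the true environment state; the update rule itself is unchanged in this simplified view.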
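Since the aggregate outputs are plain CSV, they can be loaded with any standard tooling. A minimal Python sketch using only the standard library is shown below; the inline sample and its column names (`policy`, `condition`, `effective_risk`, `defender_utility`) are hypothetical placeholders, not the dataset's actual file schema.

```python
import csv
import io

# Hypothetical sample mimicking an aggregate output file; the real
# column names and values come from the dataset's CSV files.
SAMPLE = """policy,condition,effective_risk,defender_utility
commitment,stationary,0.12,0.83
nash,stationary,0.15,0.79
"""

def load_rows(text):
    # Parse CSV text into a list of dicts, casting the numeric columns.
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["effective_risk"] = float(row["effective_risk"])
        row["defender_utility"] = float(row["defender_utility"])
        rows.append(row)
    return rows

rows = load_rows(SAMPLE)
```

For a real file, replace the `io.StringIO(text)` wrapper with an `open(path, newline="")` handle; libraries such as pandas or R's `read.csv` work equally well.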