Contributors:Jorrit Poelen, Tobias Kuhn, Katrin Leinweber
upgrade to globi libs v0.16.1 to address https://github.com/globalbioticinteractions/globalbioticinteractions/issues/461 (see https://github.com/globalbioticinteractions/globalbioticinteractions/releases/tag/v0.16.1)
Contributors:Jon Love, Jon Palmer, Jason Stajich, Tyler Esser, Erik Kastman, Daniel Bogema, David Winter
Okay, so apparently I lied about bug fixes before the py3 release. Here are some quick ones.
Also, some users have reported that the conda recipe fails to find a solution; with the help of @reslp, this was traced to the ete3 package as a likely cause among the dependencies. Since ete3 is only used in funannotate for the dN/dS calculation in funannotate compare, I'm going to remove it from the conda recipe as a dependency. Users who would like to continue using the dN/dS function in compare will need to install ete3 manually (i.e., conda may work on your system, or pip, etc.).
update the log file copy/remove in funannotate annotate, which caused a problem in the Singularity container
encoding bug fixes for some PFAM results
fix for funannotate iprscan, which was not correctly combining XML files with newer versions of InterProScan.
TephraProb v1.7 includes access to ERA-5 wind data, a different sampling of masses and some bug fixes.
Changes in TephraProb v1.7
ERA-5 is now available for download. It follows the same procedure as ERA-Interim (refer to the manual), but it uses a different Python API (cdsapi), which also requires a new key.
Create a new account on the CDS website
Retrieve your UID and key and format them as follows (where 1234 is your UID and the string following the colon is the API key):
url: https://cds.climate.copernicus.eu/api/v2
key: 1234:your-api-key
Enter this key in TephraProb under Input > Wind > Set ERA-5 API key
Install the cdsapi in TephraProb using Input > Wind > Install ERA-5 libraries
Note that steps 4-5 only need to be performed once.
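The key setup above boils down to writing a small two-line config file. Below is a minimal Python sketch of that step; `write_cdsapirc` is a hypothetical helper (not part of TephraProb or cdsapi), the credentials are placeholders, and the real file cdsapi reads is `~/.cdsapirc` rather than the temporary path used here:

```python
import os
import tempfile
from pathlib import Path

def write_cdsapirc(uid, api_key, path):
    """Write a CDS API config file in the two-line format cdsapi expects.
    'uid' and 'api_key' are placeholders; use the values from your CDS
    account page."""
    Path(path).write_text(
        "url: https://cds.climate.copernicus.eu/api/v2\n"
        f"key: {uid}:{api_key}\n"
    )
    return Path(path).read_text()

# Example with placeholder credentials, written to a temporary location
# (cdsapi itself looks for ~/.cdsapirc in your home directory):
example = write_cdsapirc("1234", "abcdef-0123",
                         os.path.join(tempfile.gettempdir(), "cdsapirc_demo"))
print(example)
```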
Relevant ERA-5 documentation
Reference page for ERA-5
Good introduction to ERA-5 on retostauffer.org.
Introduction to CDS API
Loading wind datasets
It is now possible to load existing datasets in the Input > Wind > Download Wind module.
Logarithmic sampling of mass
For subplinian/Plinian scenarios, the sampling of the eruption mass when constrain=0 was linked to the distribution chosen for the plume height. A new variable mass_sample was introduced to control the shape of the distribution from which the mass is sampled:
mass_sample=0: the mass is sampled between min_mass and max_mass
mass_sample=1: the mass is sampled between log10(min_mass) and log10(max_mass)
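The two sampling modes can be sketched as follows. This is an illustrative Python version only (TephraProb itself is written in MATLAB, and `sample_mass` is a hypothetical name, not a function from the toolbox):

```python
import math
import random

def sample_mass(min_mass, max_mass, mass_sample, rng=random):
    """Sample an eruption mass between min_mass and max_mass, either
    uniformly in linear space (mass_sample=0) or uniformly in log10
    space (mass_sample=1)."""
    if mass_sample == 0:
        return rng.uniform(min_mass, max_mass)
    if mass_sample == 1:
        return 10 ** rng.uniform(math.log10(min_mass), math.log10(max_mass))
    raise ValueError("mass_sample must be 0 or 1")

# With a wide range, log-uniform sampling visits the low orders of
# magnitude that linear-uniform sampling would almost never reach:
linear = [sample_mass(1e9, 1e12, 0) for _ in range(5)]
logged = [sample_mass(1e9, 1e12, 1) for _ in range(5)]
print(linear, logged)
```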
Linking outputs to original ESPs
A new scheme was introduced to link the content of the files dataProb.mat and dataT2_*.mat located in RUNS/runName/runNb/DATA/ back to the original eruption source parameters (ESPs). This is explained in more detail in the manual.
Fixed a bug in the sampling of wind profiles when seasonality=1
Contributors:Dirk Beyer, Philipp Wendler
TACAS'20 Artifact for CPU Energy Meter
This artifact for our TACAS'20 paper
"CPU Energy Meter: A Tool for Energy-Aware Algorithms Engineering"
consists of the tool CPU Energy Meter (https://github.com/sosy-lab/cpu-energy-meter)
which is available under the BSD 2-clause license.
Furthermore, the tool BenchExec (https://github.com/sosy-lab/benchexec),
which is available under the Apache 2.0 license,
is included to demonstrate the integration of CPU Energy Meter into BenchExec.
The tools CPU Energy Meter and BenchExec are also archived individually.
Verification algorithms are among the most resource-intensive computation tasks.
Saving energy is important for our living environment and
for reducing cost in data centers.
Yet, researchers still compare the efficiency of algorithms
in terms of consumed CPU time (or even wall time).
Perhaps one reason for this is that measuring the energy consumption of
computational processes is not as convenient as measuring the consumed time,
and tool support has been insufficient.
To close this gap, we contribute CPU Energy Meter,
a small tool that takes care of reading the energy values
that Intel CPUs track inside the chip.
In order to make energy measurements as easy as possible,
we integrated CPU Energy Meter into BenchExec,
a benchmarking tool that is already used by many researchers
and competitions in the domain of formal methods.
As evidence for usefulness, we explored the energy consumption of some state-of-the-art verifiers
and report some interesting insights, for example, that
energy consumption is not necessarily correlated with CPU time.
CPU Energy Meter requires an Intel CPU with the RAPL feature,
which is present in x86 CPUs since the Sandy Bridge generation
(e.g., Core i3/5/7 2000 and above or Xeon E3/E5/E7).
Furthermore, the tool works only on a real machine (*no VM* and no container)
because it needs direct access to the so-called model-specific registers (MSRs)
of the CPU.
To access these registers, it requires a Linux kernel with the module "msr"
(this module should be available in standard distribution kernels).
The tool was tested on 64-bit Linux systems.
Furthermore, for installation, root access is required
(for example to activate the mentioned kernel module).
For a Debian-based distribution (e.g., Ubuntu),
it is recommended to install the Debian packages (cf. TACAS20-cpu-energy-meter/deb/INSTALL.md).
Otherwise, please follow the instructions for installing from source.
Note that in both cases, CPU Energy Meter can be installed either alone
(for experimenting with this tool alone)
or in addition to BenchExec.
If CPU Energy Meter is correctly installed, the tool can be executed with "cpu-energy-meter".
It will produce output (showing the consumed energy while it was running)
only after Ctrl+C is pressed to terminate it.
A mode with debug output is available via "cpu-energy-meter -d".
The installation attempts to configure the host system such that CPU Energy Meter
is usable for all users.
If the tool does not work, please try executing it with root privileges: "sudo cpu-energy-meter".
Please also check that the kernel module "msr" is loaded with "lsmod | grep msr"
and that the appropriate device modules are present: "ls /dev/cpu/*/msr".
Use in Benchmarking
CPU Energy Meter is intended to be used primarily by benchmarking frameworks
rather than directly by end users.
If BenchExec is installed, it is possible to benchmark a single tool execution
with "runexec", e.g., "runexec --read-only-dir / -- /bin/sh -c 'echo "scale=2000; a(1)*4" | bc -l'"
(this example calculates 2000 digits of pi,
of course different commands can be used).
The command "sudo sysctl -w kernel.unprivileged_userns_clone=1" might be necessary
for BenchExec on non-Ubuntu systems to configure the kernel appropriately.
If CPU Energy Meter is installed, BenchExec will automatically use it to measure the energy consumption
and will report values such as "cpuenergy" and "cpuenergy-pkg0".
The value "cpuenergy" is the sum of the "cpuenergy-pkg[0-9]+" values for all CPUs that were used.
The other values report the energy consumption of the various measurement domains
that the CPU(s) support (it depends on the CPU which values are available).
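The aggregation described above is simple to sketch: sum every per-package value whose name matches "cpuenergy-pkg[0-9]+". The dict below is a stand-in with hypothetical example values, not real BenchExec output, and `total_cpu_energy` is an illustrative helper, not a BenchExec API:

```python
import re

def total_cpu_energy(values):
    """Sum the per-package energy values (Joules) into one total,
    mirroring how the 'cpuenergy' value relates to the
    'cpuenergy-pkg[0-9]+' values."""
    return sum(v for k, v in values.items()
               if re.fullmatch(r"cpuenergy-pkg[0-9]+", k))

# Hypothetical measurement output for a two-socket machine:
measurements = {"cpuenergy-pkg0": 12.5, "cpuenergy-pkg1": 11.0,
                "cputime": 3.2}
print(total_cpu_energy(measurements))  # 23.5
```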
BenchExec also supports executing and measuring large sets of tool executions automatically.
This is independent of CPU Energy Meter and just requires the same steps
as benchmarking with BenchExec alone would.
As an example for those who are not familiar with BenchExec
we provide an example benchmark definition that calculates pi with 1000 to 10000 digits.
To execute this benchmark set, run "benchexec benchmark-example-calculatepi.xml".
BenchExec needs a proper setup of cgroups and uses some kernel features that might not be available on all systems.
If you have problems executing benchexec,
run "sudo sysctl -w kernel.unprivileged_userns_clone=1"
and "benchexec benchmark-example-calculatepi.xml --read-only-dir / -T -1 -M -1".
This will run BenchExec in a restricted mode with fewer measurements,
but measuring CPU energy will still work if cpu-energy-meter is properly installed.
In any case, to generate HTML and CSV tables with the results (including the energy measurements),
run "table-generator results/benchmark-example-calculatepi.*.results.xml.bz2 --all-columns --show" afterwards.
In order to use benchexec with other tools (e.g., verifiers),
a tool-specific module needs to be added to BenchExec
and an appropriate benchmark definition needs to be written.
This has been done for all participants of SV-COMP,
so it is possible to benchmark any participating verifier
using the verifier archives from https://gitlab.com/sosy-lab/sv-comp/archives-2019
and the benchmark definitions similar to those at https://github.com/sosy-lab/sv-comp/tree/svcomp19.
Reproducing Fig. 1
Fig. 1 was produced by measuring the energy consumption of an Intel Core i5-6440HQ CPU
in a Lenovo ThinkPad T460p for 10s.
This experiment can be replicated with the command
cpu-energy-meter & sleep 10s ; kill -INT %1
(reported domains and results will vary depending on the system).
Reproducing Table 1 and Figs. 2-4
The results for these figures were taken from the literature
(Dirk Beyer, "Automatic Verification of C and Java Programs: SV-COMP 2019", Proc. TACAS'19).
The raw data are available at
More information on replicating these results is in the above publication
and on the SV-COMP website (https://sv-comp.sosy-lab.org/2019/systems.php).
The SV-COMP benchmark set (https://github.com/sosy-lab/sv-benchmarks/tree/svcomp19),
all participating tools (https://gitlab.com/sosy-lab/sv-comp/archives-2019),
and the benchmark definitions (https://github.com/sosy-lab/sv-comp/tree/svcomp19)
are available online.
We provide baseline measurements (of an idle state)
of the same system on which SV-COMP'19 was executed
as a comparison for the data in Table 1 and Figs. 2-4:
Domain        Avg. Idle Power (W)
PP0 (Core)    0.34
For the package domain, this is one order of magnitude less
than the average power when benchmarking CBMC or CPA-Seq.
Note that in Figs. 2 and 3 (which show the package domain)
the gray line towards the bottom of the plot
represents an average power of 1 W
and is thus roughly showing the idle energy consumption.
If we were to subtract the idle energy consumption from the energy measurements
in Table 1 and Figs. 2-4, the overall picture would not change
and the conclusions would remain the same.
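The subtraction argued for above is a one-line calculation: the idle energy over a run is the idle power times the run duration. A minimal sketch with hypothetical numbers (the 1 W baseline matches the package-domain figure mentioned above; the run values are invented for illustration):

```python
def energy_above_idle(measured_energy_j, idle_power_w, duration_s):
    """Subtract the baseline (idle) energy from a measured energy value:
    idle energy = idle power * duration."""
    return measured_energy_j - idle_power_w * duration_s

# Hypothetical run: 9000 J measured over 900 s in the package domain,
# against a 1 W idle baseline:
print(energy_above_idle(9000.0, 1.0, 900.0))  # 8100.0
```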
Contributors:Ioannis Dimitris Apostolopoulos
Use this Python code to classify 2D lung nodule images of 32x32 pixels as benign or malignant. The images come from either CT or PET/CT technology. You may use the publicly available LIDC-IDRI dataset to evaluate the performance of the CNNs. The initial code was developed to classify 167 single lung nodules from a private dataset of PET/CT images.
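For readers unfamiliar with the setup, the forward pass of such a binary-classification CNN on a 32x32 image can be sketched in plain NumPy. This is an illustrative toy (random, untrained weights; one conv layer, one pooling layer, one dense output), not the released code or its architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution of a (H, W) image with (n, kh, kw) kernels."""
    n, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((n, H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = img[i:i + kh, j:j + kw]
            out[:, i, j] = (kernels * patch).sum(axis=(1, 2))
    return out

def maxpool2(x):
    """2x2 max pooling over (n, H, W) feature maps (H, W even)."""
    n, H, W = x.shape
    return x.reshape(n, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def forward(img, kernels, w, b):
    feat = np.maximum(conv2d(img, kernels), 0.0)  # ReLU activation
    pooled = maxpool2(feat)
    logit = pooled.ravel() @ w + b                # dense output layer
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid: P(malignant)

# Random (untrained) weights, shapes chosen for a 32x32 input:
kernels = rng.standard_normal((4, 3, 3)) * 0.1   # conv out: (4, 30, 30)
w = rng.standard_normal(4 * 15 * 15) * 0.01      # pooled: (4, 15, 15)
b = 0.0
nodule = rng.random((32, 32))                    # stand-in nodule image
p = forward(nodule, kernels, w, b)
print(p)
```

A real classifier would of course train these weights (e.g., by gradient descent on labeled nodules); the sketch only shows how a 32x32 patch maps to a benign/malignant probability.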
Contributors:Danny Parsons, Maxwell Fundi, Shadrack Kibet, Steven Ndung'u, John Lunalo, lilyclements, Steve Kogo, Polycarp Okock, Wycklife, Lazarus Kioko Muthenya, Alex Sananka, patowhiz, François Renaud, bethanclarke, David Stern, Ivanluv, Yvonne Achieng', MaryMutahi, JacklineKemboi, duutabib, jackykemboi, Marthafuraha Nicholaus, thushari93, Michael Clapham, lloyddewit, Evans, rdstern, jkmusyoka
A statistics software package powered by R
This release contains code, simulated data, and processed source data files for all analyses and all figures presented in:
Gostic, Gomez, Mummah, Kucharski & Lloyd-Smith. "Estimated effectiveness of symptom and risk screening to prevent the spread of COVID-19," eLife, 2020.
Last update 20-Feb-2020
Contributors:Pavle Boškoski, Dmytro Iatsenko, Gemma Lancaster, Sam McCormack, Julian Newman, Guru Vamsi Policharla, Valentina Ticcinelli, Tomislav Stankovski, Aneta Stefanovska
PyMODA is a Python implementation of MODA, a numerical toolbox developed by the Nonlinear & Biomedical Physics group at Lancaster University for analysing real-life time-series.
Algorithms developed by members of:
Nonlinear and Biomedical Physics Group, Physics Department, Lancaster
University, UK from 2006 until present.
Nonlinear Dynamics and Synergetic Group at the Faculty of Electrical
Engineering, University of Ljubljana, Slovenia from 1996 to 2006.
Aneta Stefanovska extends her personal thanks to Aleš Založnik, and to her PhD students.