Tillage is a central element in agricultural soil management and has direct and indirect effects on processes in the biosphere. Effects of agricultural soil management can be assessed by soil, crop, and ecosystem models, but global assessments are hampered by a lack of information on soil management systems. This study presents a classification of globally relevant tillage practices and a global, spatially explicit data set on the distribution of tillage practices around the year 2005. This source code complements the data set on global gridded tillage systems described in Porwollik et al. (2018, http://doi.org/10.5880/PIK.2018.012) and is intended to help interested users understand the mapping. The code, written in R, can be used to reproduce the mapping and to build on it for scenarios such as the expansion of sustainable soil management practices like Conservation Agriculture (CA). Both the data set and the R code are described in detail in Porwollik et al. (2018, ESSD). The code is written in the statistical software 'R' using the 'raster', 'fields', and 'ncdf4' packages.
We present the mapping result of six tillage systems for 42 crop types, plus the potentially suitable Conservation Agriculture area, as variables:
1 = conventional annual tillage
2 = traditional annual tillage
3 = reduced tillage
4 = Conservation Agriculture
5 = rotational tillage
6 = traditional rotational tillage
7 = Scenario Conservation Agriculture area
Reference system: WGS84
Geographic extent: Longitude (min, max) (-180, 180), Latitude (min, max) (-56, 84)
Resolution: 5 arc-minutes
Time period covered: around the year 2005
Type: NetCDF
Dataset sources (with indication of reference):
1. Grid cell allocation key to country: IFPRI/IIASA (2017, cell5m_allockey_xy.dbf.zip)
2. Crop-specific physical cropland: IFPRI/IIASA (2017, spam2005v3r1_global_phys_area.geotiff.zip)
3. SoilGrids depth to bedrock: Hengl et al. (2014)
4. Aridity index: FAO (2015)
5. Conservation Agriculture area: FAO (2016)
6. Income level: World Bank (2017)
7. Field size: Fritz et al. (2015)
8. GLADIS water erosion: Nachtergaele et al. (2011)
CHANGELOG for version 1.1: improved calculation and mapping; for details see README.PDF
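When post-processing the NetCDF grids, the integer class codes listed above can be mapped to readable labels. A minimal sketch in Python (the published code is in R; the dictionary below simply restates the code list from this description):

```python
# Tillage system codes as listed in the data set description
# (value 7 marks the Conservation Agriculture scenario area).
TILLAGE_CLASSES = {
    1: "conventional annual tillage",
    2: "traditional annual tillage",
    3: "reduced tillage",
    4: "Conservation Agriculture",
    5: "rotational tillage",
    6: "traditional rotational tillage",
    7: "Scenario Conservation Agriculture area",
}

def label(code):
    """Return the tillage system name for a grid-cell code."""
    return TILLAGE_CLASSES.get(int(code), "unknown")

print(label(4))  # Conservation Agriculture
```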
Data Types:
  • Software/Code
Radiance Light Trends is a GIS web application designed to quickly display information about radiance trends at a specific location (available online at https://lighttrends.lightpollutionmap.info). It uses data from two satellite systems, DMSP-OLS and VIIRS DNB, with data processing by NOAA. New VIIRS layers are added automatically as soon as NOAA makes them available to the public. The web application allows the user to examine changes in nighttime light emissions (nearly) worldwide, from 1992 up until last month. From 1992 to 2013, data comes from the Operational Linescan System of the Defense Meteorological Satellite Program (DMSP) satellites. From 2012 to the present, data comes from the Day/Night Band of the Visible Infrared Imaging Radiometer Suite instrument (VIIRS DNB). Due to significant differences between the instruments (as described by Miller et al., 2013), it is not possible to have a single record running from 1992 to today. A description of the VIIRS DNB night lights product used in this application was published by Elvidge et al. (2017); the data used in the app can be accessed from the NOAA Earth Observation Group (EOG) website: https://ngdc.noaa.gov/eog/download.html
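The two coverage periods overlap in 2012-2013, so which record applies depends on the year queried. A small illustrative helper (not part of the application) encoding the coverage stated above:

```python
def available_instruments(year):
    """Return which night-lights records cover a given year,
    per the coverage periods stated in the description
    (DMSP-OLS: 1992-2013, VIIRS DNB: 2012 onward)."""
    instruments = []
    if 1992 <= year <= 2013:
        instruments.append("DMSP-OLS")
    if year >= 2012:
        instruments.append("VIIRS DNB")
    return instruments

print(available_instruments(2012))  # ['DMSP-OLS', 'VIIRS DNB']
```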
Data Types:
  • Software/Code
BayHunter is an open source Python tool to perform a Markov chain Monte Carlo (McMC) transdimensional Bayesian inversion of receiver functions and/or surface wave dispersion. It inverts for the velocity-depth structure, the number of layers, and noise parameters (noise correlation and amplitude). The forward modeling codes are provided within the package, but are easily replaceable with the user's own codes. It is also possible to add (completely different) data sets. The BayWatch module can be used to live-stream the inversion while it is running: this makes it easy to see how each chain is exploring the parameter space, how the data fit and the models change, and in which direction the inversion progresses.
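The core accept/reject step that drives such McMC samplers can be sketched in a few lines. This is a generic Metropolis acceptance rule under the usual log-posterior formulation, not BayHunter's actual implementation:

```python
import math
import random

def accept(log_post_current, log_post_proposed, rng=random.random):
    """Metropolis acceptance rule: always accept moves that increase
    the log-posterior; accept downhill moves with probability
    exp(delta), where delta is the change in log-posterior."""
    delta = log_post_proposed - log_post_current
    return delta >= 0 or rng() < math.exp(delta)

# Uphill moves are always taken; strongly downhill moves rarely are.
print(accept(-10.0, -5.0))   # True
```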
Data Types:
  • Software/Code
The scripts and workflow are supplementary material to "3D Modelling of Vertical Gravity Gradients and the delimitation of tectonic boundaries: The Caribbean oceanic domain as a case study" (Gómez-García et al., 2019). The codes include the calculation of the VGG response of a 3D lithospheric model, in spherical coordinates, using the software Tesseroids (Uieda, 2016). The "Readme_Workflow_2019_002.pdf" file provides detailed information about the structure of this repository, step-by-step instructions for executing the scripts, and the list of software required for the workflow to run correctly. All the information provided here will allow the user to reproduce the results and figures of the main paper. Detailed information is also given in the associated README.
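The vertical gravity gradient (VGG, or Tzz) is the second vertical derivative of the gravitational potential; tesseroid codes integrate it over volume elements. A conceptual point-mass sketch (not the Tesseroids implementation, which handles spherical prisms):

```python
# Tzz of a point mass: second z-derivative of V = G*m/r.
# Conceptual building block only; sign conventions and units (Eotvos)
# vary between geodetic codes.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tzz_point_mass(mass, x, y, z):
    """Tzz = d2V/dz2 for V = G*m/r, evaluated at offset (x, y, z)
    from the mass."""
    r = (x**2 + y**2 + z**2) ** 0.5
    return G * mass * (3 * z**2 / r**5 - 1 / r**3)

# Directly above the mass (x = y = 0), Tzz reduces to 2*G*m/r^3.
print(tzz_point_mass(1e12, 0.0, 0.0, 1000.0))
```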
Data Types:
  • Software/Code
EMMA – End Member Modelling Analysis of grain-size data is a technique to unmix multimodal grain-size data sets, i.e., to decompose the data into the underlying grain-size distributions (loadings) and their contributions to each sample (scores). The R package EMMAgeo contains a series of functions to perform EMMA based on eigenspace decomposition. The data are rescaled and transformed to yield results in meaningful units, i.e., volume percentage. EMMA can be performed in one deterministic and two robust ways, the latter taking into account incomplete knowledge about model parameters. The model outputs can be interpreted in terms of sediment sources, transport pathways and transport regimes (loadings) as well as their relative importance throughout the sample space (scores).
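The underlying linear mixing model can be illustrated with a tiny forward example. This is a pure-Python sketch of the mixing equation (samples = scores x loadings), not EMMAgeo's eigenspace inversion, and the distributions below are invented for illustration:

```python
# Forward linear mixing: each measured grain-size distribution is a
# weighted sum of end-member distributions (loadings), with per-sample
# weights (scores). EMMA solves the inverse of this problem.
loadings = [
    [0.7, 0.2, 0.1],  # end member 1: fine-skewed distribution
    [0.1, 0.3, 0.6],  # end member 2: coarse-skewed distribution
]
scores = [[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]]  # one row per sample

samples = [
    [sum(s[k] * loadings[k][j] for k in range(2)) for j in range(3)]
    for s in scores
]

# Because scores and loadings each sum to 1 (volume percentage),
# every mixed sample distribution sums to 1 as well.
print([round(sum(row), 6) for row in samples])
```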
Data Types:
  • Software/Code
The software package "ClassifyStorms" version 1.0.1 performs a classification of geomagnetic storms according to their interplanetary driving mechanisms based exclusively on ground magnetometer measurements. In this version, two such driver classes are considered for storms dating back to 1930: class 0 contains storms driven by Corotating or Stream Interaction Regions (C/SIRs), and class 1 contains storms driven by Interplanetary Coronal Mass Ejections (ICMEs). The properties and geomagnetic responses of these two solar wind structures are reviewed, e.g., by Kilpua et al. (2017, http://doi.org/10.1007/s11214-017-0411-3). The classification task is executed by a supervised binary logistic regression model in the framework of Python's scikit-learn library. The model is validated mathematically and physically by checking the driver occurrence statistics in dependence on the solar cycle phase and storm intensity. A detailed description of the classification model is given in Pick et al. (2019), to which this software is supplementary material.
Under "Files" you can download ClassifyStorms-V1.0.1.zip, which contains the Jupyter notebook "ClassifyStorms.ipynb" (https://jupyter.org/) and the Python modules "Imports.py", "Modules.py" and "Plots.py". Check for an up-to-date release of the software on GitLab via https://gitext.gfz-potsdam.de/lpick/ClassifyStorms (under Project, Releases). The "Readme.md" file provides all information needed to run or modify "ClassifyStorms" from the GitLab source. The software depends on the input data set "Input.nc", an xarray Dataset (http://xarray.pydata.org/en/stable) saved in NetCDF format (https://www.unidata.ucar.edu/software/netcdf), which you can also download under "Files". It contains:
1. the HMC index: a three-hour running mean with weights [0.25, 0.5, 0.25] of the original Hourly Magnetospheric Currents index (HMC index, http://doi.org/10.5880/GFZ.2.3.2018.006).
2. the geomagnetic observatory data: vector geomagnetic disturbances from 34 mid-latitude observatories during 1900-2015 in the Cartesian Centered Dipole coordinate system. The original observatory data were downloaded from the WDC for Geomagnetism, Edinburgh (http://www.wdc.bgs.ac.uk/) and processed as described in section 2.1 of Pick et al. (2019).
3. the "reference" geomagnetic storms: universal time hours of 868 geomagnetic storm peaks together with their interplanetary drivers (class labels 0 or 1, see above) as described in section 2.2 of Pick et al. (2019). These events are taken from published lists (Jian et al., 2006a, 2006b, 2011; Shen et al., 2017; Turner et al., 2009), which are gathered in the separate ASCII file "ReferenceEvents.txt" (under "Files") for a quick overview.
4. additional quantities for plotting: time series of the Kp (since 1932) and Dst (since 1957) geomagnetic indices from the WDC for Geomagnetism, Kyoto (http://wdc.kugi.kyoto-u.ac.jp/wdc/Sec3.html) as well as the yearly mean total sunspot number from WDC-SILSO, Royal Observatory of Belgium, Brussels (http://sidc.be/silso/datafiles).
The output of ClassifyStorms is "StormsClassified.csv" (under "Files"). This table lists the Date (Year-Month-Day) and Time (Hour:Minutes:Seconds) of 7546 classified geomagnetic storms together with the predicted interplanetary driver class label (0 or 1) and the corresponding probability (between 0 and 1).
Version history:
20 Sep 2019: Version 1.0.1: Correction of plotting mistake in Figure m / Figure S4 (see GitLab repository for details)
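The probability column in "StormsClassified.csv" is the standard logistic regression output: the sigmoid of a weighted sum of input features. A self-contained sketch with hypothetical weights and features (the real model is fitted with scikit-learn on the HMC-based inputs described above):

```python
import math

def storm_class_probability(features, weights, bias):
    """Probability that a storm belongs to class 1 (ICME-driven)
    under a logistic regression model: sigmoid(w . x + b).
    Weights and bias here are placeholders, not fitted values."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

p = storm_class_probability([1.2, -0.4], [0.8, 1.5], 0.1)
print(round(p, 3))
```

A storm would be assigned class 1 when this probability exceeds 0.5, and class 0 otherwise.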
Data Types:
  • Software/Code
Surrogate playground is an automated machine learning approach written for rapidly screening a large number of different models to serve as surrogates for a slow-running simulator. This code was written for a reactive transport application where a fluid flow model (hydrodynamics) is coupled to a geochemistry simulator (reactions in time and space) to simulate scenarios such as underground storage of CO2 or hydrogen storage for excess energy from wind farms. The challenge for such applications is that the geochemistry simulator is typically slow compared to the fluid dynamics and constitutes the main bottleneck for producing highly detailed simulations of such scenarios. This approach attempts to find machine learning models that can replace the slow-running simulator when trained on input-output data from the geochemistry simulator. The code may be of more general interest, as this prototype can be used to screen many different machine learning models for any regression problem. To illustrate this, it also includes a demonstration example using the standard Boston housing data set.
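The screening idea reduces to fitting several cheap candidate models on simulator input/output pairs and keeping the one with the lowest error. A toy pure-Python sketch with two invented candidates (the actual tool screens a large library of machine-learning regressors):

```python
# Toy surrogate screening: compare candidate models by mean squared
# error on pretend simulator outputs and keep the best one.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]  # pretend slow-simulator outputs

def fit_constant(xs, ys):
    """Baseline surrogate: always predict the mean output."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Closed-form least-squares line through the training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

candidates = {"constant": fit_constant(xs, ys), "linear": fit_linear(xs, ys)}
best = min(candidates, key=lambda name: mse(candidates[name], xs, ys))
print(best)  # linear
```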
Data Types:
  • Software/Code
Geochemical models are used to seek answers about the composition and evolution of groundwater, spill remediation, the viability of geothermal resources, and other important geoscientific applications. To understand these processes, it is useful to evaluate geochemical model response to different input parameter combinations. Running the model with varying input parameters creates a large amount of output data, and it is a challenge to screen this data to identify the significant relationships between input parameters and output variables. To address this problem we developed a Visual Analytics approach in an ongoing collaboration between the Geoinformatics and Hydrogeology sections of the GFZ German Research Centre for Geosciences. We implement our approach as an interactive data exploration tool called GCex. GCex is a Visual Analytics approach and prototype that supports interactive exploration of geochemical models. It encodes many-to-many input/output relationships with a simple yet effective approach called Stacked Parameter Relation (SPR). GCex assists in the setup of simulations, model runs, data collection, and result exploration, greatly enhancing the user experience in tasks such as uncertainty and sensitivity analysis, inverse modeling, and risk assessment. While in principle model-agnostic, the prototype currently supports and is tied to the popular geochemical code PHREEQC; modifying it to support other models would not be complicated. The GCex prototype was originally written by Janis Jatnieks at GFZ-Potsdam. It relies on Rphree (an R interface to the PHREEQC geochemical simulation model) written by Marco De Lucia at GFZ-Potsdam. A compatible version of Rphree is bundled with this installation. Release: https://gitext.gfz-potsdam.de/sec15pub/GCex/tags/1.0
Data Types:
  • Software/Code
The file is an XML Graph file, which can be used to process Sentinel-1 satellite images in the Sentinel Application Platform (SNAP). Using this file enables batch processing of Sentinel-1 (IW, GRDH) images. The preprocessing is optimized for land use classification. The following tools are executed:
  • Read
  • Subset (TR32)
  • Apply Orbit File
  • Calibration
  • Terrain Flattening (using DGM1)
  • Speckle Filter (Gamma Map 3x3)
  • Range-Doppler Terrain Correction using DGM1
  • Conversion to dB
  • Conversion of datatype
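SNAP graph files chain such operators as XML nodes, each consuming the output of the previous one. A minimal structural sketch (node names and parameters here are illustrative, not the contents of this particular file; operator parameters are omitted):

```xml
<graph id="Preprocess_S1_GRDH">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters/>
  </node>
  <node id="Apply-Orbit-File">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters/>
  </node>
  <!-- ... further nodes in the same pattern for Calibration,
       Terrain Flattening, Speckle Filter, Terrain Correction,
       and dB conversion ... -->
</graph>
```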
Data Types:
  • Software/Code
Environmental seismology is the scientific field that studies seismic signals emitted by Earth surface processes. The R package eseis provides all relevant functions to read/write seismic data files, prepare, analyse, and visualise seismic data, and generate reports of the processing history. eseis contains a growing set of functions to handle the complete workflow of environmental seismology. The package supports reading the two most common seismic data formats and provides general functions for preparatory and analytical signal processing as well as specialised functions for handling signals generated by Earth surface processes. Finally, graphical plot functions are provided, too. The software package contains 51 functions and two example data sets (eseis-supplementary_material.zip). It makes use of a series of dependency packages described in the DESCRIPTION file of the package.
Data Types:
  • Software/Code