ISPRS Journal of Photogrammetry and Remote Sensing

ISSN: 0924-2716


Datasets associated with articles published in ISPRS Journal of Photogrammetry and Remote Sensing

5 results
  • The dataset includes 45 natural forest scan clips with manually labelled classes (1: stem, 2: branch, 3: other); a minimal loading sketch is given after this list.
    Data Types:
    • Dataset
    • File Set
  • The "1-traing data.xlsx" contains the logistic regression model training data involved in the tree-based loop closure detection. The "2-result data.xlsx" contains the tree position references, the estimates based on the RTAB-Map method and the optimized results based on our tree-based backend.
    Data Types:
    • Tabular Data
    • Dataset
  • The dataset consists of PlanetScope- and Sentinel-2-derived scenes (i.e. chips) collected over the Wet Tropics of Australia between December 1, 2016 and November 1, 2017. All PlanetScope imagery contains four bands of data: red, green, blue (RGB) and near infrared (NIR), with a ground sample distance (GSD) of 3.125 m. In contrast, Sentinel-2 imagery was trimmed to contain only the RGB and NIR bands and resampled to 3.125 m resolution to match the PlanetScope imagery. Here we refer to the Wet Tropics PlanetScope- and Sentinel-2-derived data as datasets T-PS and T-S2, respectively. The T-PS and T-S2 datasets were generated by splitting the satellite imagery into image scenes with a grid of 128×128 pixels (i.e. 400×400 m) and extracting a random sample of 2.5% and 0.5% of scenes per time step, respectively, resulting in 4,943 and 4,993 image scenes. Datasets T-PS and T-S2 were manually labeled with 12 labels split into three groups: 1) cloud labels (‘clear’, ‘partly cloudy’, ‘cloudy’, ‘haze’); 2) shade labels (‘unshaded’, ‘partly shaded’, ‘shaded’), which indicate the level of shade cast by cloud cover on a specific scene; 3) land cover labels (‘forest’, ‘bare ground’, ‘water’, ‘agriculture’, ‘habitation’). Labels within the cloud and shade groups are mutually exclusive, that is, each image scene received exactly one cloud label and exactly one shade label. A sketch of the grid-splitting and sampling step is given after this list.
    Data Types:
    • Dataset
    • File Set
  • In the coastal areas of Northeast Brazil, coconut growers are replacing the tall varieties with dwarf ones, following incentives from the coconut water market. The study assessed dwarf coconut water productivity (WP) to support rational irrigation scheduling of dwarf coconut orchards, using Landsat 8 images together with agrometeorological data for the year 2016 in Camocim County, in the coastal zone of Ceará state. The SAFER algorithm was used to retrieve actual evapotranspiration (ET), while biomass production (BIO) was estimated with Monteith’s radiation use efficiency (RUE) model. The highest ET and BIO rates, above 4.0 mm d⁻¹ and 140 kg ha⁻¹ d⁻¹, respectively, occurred from May to July, yielding pixel WP values (BIO/ET) larger than 3.5 kg m⁻³. From the trend of the moisture indicator, defined as the ratio of ET to reference evapotranspiration (ET0), some water stress conditions were evident, with ET/ET0 dropping below 0.60 from the start of August to the end of the year, lowering the WP values. Expressed in terms of fruits and coconut water produced, WP averaged 1.9 coconut fruits and 0.8 liters of coconut water per cubic meter of water consumed. The models tested can be employed as tools for management, agro-climatic zonation and irrigation scheduling for dwarf coconut in the Brazilian Northeast. A worked WP unit-conversion example is given after this list.
    Data Types:
    • Image
    • Dataset
  • Sample scripts to project images onto a digital terrain model (DTM), followed by plot-based sample extraction; a minimal geoJSON-loading sketch is given after this list.
    Development repository: https://gitlab.ethz.ch/crop_phenotyping/PhenoFly_data_processing_tools
    Based on: Roth, Aasen, Walter, Liebisch (2018): Extracting leaf area index using viewing geometry effects—A new perspective on high-resolution unmanned aerial system photography. ISPRS Journal of Photogrammetry and Remote Sensing. https://doi.org/10.1016/j.isprsjprs.2018.04.012
    Example (see Demo.py):
    Input:
    • Agisoft camera position file (_TestData/sonyDX100II/camera_position_RGB.txt)
    • Digital terrain model in STL file format (_TestData/sonyDX100II/DTM.stl)
    • Folder with images (_TestData/sonyDX100II/raw/)
    • Polygon sampling layer in geoJSON format (_TestData/sonyDX100II/samples_plots.geojson)
    Output:
    • Per-image coordinate and viewpoint information (_Demo_output/dg/)
    • Image boundaries in world coordinates as geoJSON (_Demo_output/dg/DG_field_of_views.geojson)
    • Extracted plot-based image parts (_Demo_output/samples/)
    • Viewpoint of each extracted sample (_Demo_output/samples/viewpoint.csv)
    Data Types:
    • Dataset
    • File Set
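For the forest scan clips (first entry above), a minimal loading sketch under the assumption that each clip is stored as ASCII rows of x, y, z and a class label; the file name and column layout are illustrative, not taken from the dataset description.

```python
import numpy as np

# Class codes taken from the dataset description.
CLASS_NAMES = {1: "stem", 2: "branch", 3: "other"}

def split_by_class(clip_path):
    """Load one scan clip (assumed ASCII: x y z label) and group points by class."""
    points = np.loadtxt(clip_path)                   # shape (N, 4): x, y, z, label
    xyz, labels = points[:, :3], points[:, 3].astype(int)
    return {name: xyz[labels == code] for code, name in CLASS_NAMES.items()}

# Example (placeholder file name):
# parts = split_by_class("clip_01.txt")
# print({k: v.shape[0] for k, v in parts.items()})   # point count per class
```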
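For the tree-based loop closure entry, a sketch of how the logistic regression training spreadsheet might be consumed; the column names ("label" and the remaining feature columns) are assumptions, since the sheet layout is not described here.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the training sheet; the column names below are placeholders.
df = pd.read_excel("1-traing data.xlsx")
X = df.drop(columns=["label"]).values   # assumed feature columns
y = df["label"].values                  # assumed binary loop-closure label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```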
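For the T-PS/T-S2 entry, a sketch of the grid-splitting and random-sampling step as described: cut a four-band RGB+NIR mosaic into 128×128-pixel chips and keep a random fraction per time step. The array layout and function are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sample_chips(mosaic, chip=128, fraction=0.025, seed=0):
    """Split a (H, W, 4) RGB+NIR array into chip×chip scenes and keep a random fraction."""
    h, w, _ = mosaic.shape
    chips = [mosaic[r:r + chip, c:c + chip]
             for r in range(0, h - chip + 1, chip)
             for c in range(0, w - chip + 1, chip)]
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(chips), size=max(1, int(fraction * len(chips))), replace=False)
    return [chips[i] for i in keep]

# E.g. fraction=0.025 for PlanetScope (T-PS) and 0.005 for Sentinel-2 (T-S2),
# applied per time step; at 3.125 m GSD, 128 px × 3.125 m = 400 m per chip side.
```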
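For the dwarf coconut entry, a worked example of the WP = BIO/ET ratio with the unit conversion made explicit: 1 mm of evapotranspired water over 1 ha is 10 m³, so WP in kg m⁻³ equals BIO (kg ha⁻¹ d⁻¹) divided by 10 × ET (mm d⁻¹). The helper functions are illustrative, not part of the SAFER code.

```python
def water_productivity(bio_kg_ha_day, et_mm_day):
    """WP = BIO / ET in kg m^-3; 1 mm of water over 1 ha = 10 m^3."""
    return bio_kg_ha_day / (et_mm_day * 10.0)

def moisture_indicator(et_mm_day, et0_mm_day):
    """ET/ET0 ratio; values below about 0.60 flagged water stress in the study."""
    return et_mm_day / et0_mm_day

# Consistency check against the reported peak values:
print(water_productivity(140.0, 4.0))   # 3.5 kg m^-3, matching the > 3.5 kg m^-3 figure
```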
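For the PhenoFly scripts entry, a minimal sketch of reading the polygon sampling layer (a standard geoJSON FeatureCollection) before plot-based extraction; the property key used for the plot name is an assumption, and this is not code from the linked repository.

```python
import json

# Load the plot polygons used for plot-based sample extraction.
with open("_TestData/sonyDX100II/samples_plots.geojson") as f:
    plots = json.load(f)

for feature in plots["features"]:
    name = feature["properties"].get("name", "<unnamed>")  # property key is an assumption
    ring = feature["geometry"]["coordinates"][0]           # outer ring of the polygon
    print(name, "vertices:", len(ring))
```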