Data for: Application of deep learning models to provide a generalizable approach for cloud, shadow and land cover classification in PlanetScope and Sentinel-2 imagery

Published: 17 Sep 2019 | Version 1 | DOI: 10.17632/6gdybpjnwh.1
Contributor(s): Iurii Shendryk

Description of this data

The dataset consists of PlanetScope and Sentinel-2 derived scenes (i.e. chips) collected over the Wet Tropics of Australia between December 1, 2016 and November 1, 2017. All PlanetScope imagery contains four bands: red, green and blue (RGB) plus near infrared (NIR), and has a ground sample distance (GSD) of 3.125 m. The Sentinel-2 imagery was trimmed to contain only the RGB and NIR bands and resampled to 3.125 m resolution to match the PlanetScope imagery. Here we refer to the Wet Tropics PlanetScope- and Sentinel-2-derived data as datasets T-PS and T-S2, respectively. The T-PS and T-S2 datasets were generated by splitting the satellite imagery into 128×128 pixel (i.e. 400×400 m) image scenes using a regular grid and extracting a random sample of 2.5% and 0.5% of the scenes per time step, respectively. This resulted in 4,943 image scenes in the T-PS dataset and 4,993 in the T-S2 dataset. Both datasets were manually labeled with 12 labels split into three groups:

  1. Cloud labels (‘clear’, ‘partly cloudy’, ‘cloudy’, ‘haze’).
  2. Shade labels (‘unshaded’, ‘partly shaded’, ‘shaded’), which indicated the level of shade caused by cloud cover on a specific scene.
  3. Land cover labels (‘forest’, ‘bare ground’, ‘water’, ‘agriculture’, ‘habitation’).
Labels within the cloud and shade groups are mutually exclusive, i.e. each image scene received exactly one cloud label and one shade label. A minimal sketch of reading one such labelled scene follows.
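The sketch below illustrates the chip and label structure described above: reading one four-band, 128×128 pixel scene and checking it against the three label groups. The file path and naming are hypothetical, as the file layout of the published archive is not described here; the actual names may differ.

```python
import numpy as np
import rasterio  # for reading GeoTIFF image chips

# Label groups as described above; each scene receives exactly one cloud
# label and one shade label, plus labels from the land cover group.
CLOUD_LABELS = ["clear", "partly cloudy", "cloudy", "haze"]
SHADE_LABELS = ["unshaded", "partly shaded", "shaded"]
LAND_COVER_LABELS = ["forest", "bare ground", "water", "agriculture", "habitation"]


def read_chip(path):
    """Read one image chip and verify the expected shape: 4 bands (RGB + NIR),
    128 x 128 pixels at 3.125 m GSD (i.e. a 400 x 400 m footprint)."""
    with rasterio.open(path) as src:
        chip = src.read()  # ndarray with shape (bands, rows, cols)
    assert chip.shape == (4, 128, 128), f"unexpected chip shape {chip.shape}"
    return chip.astype(np.float32)


# Hypothetical example path; the real file naming in the dataset may differ.
# chip = read_chip("T-PS/scene_000001.tif")
```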


This data is associated with the following publication:

Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery

Published in: ISPRS Journal of Photogrammetry and Remote Sensing


    Cite this dataset

Shendryk, Iurii (2019), “Data for: Application of deep learning models to provide a generalizable approach for cloud, shadow and land cover classification in PlanetScope and Sentinel-2 imagery”, Mendeley Data, v1. http://dx.doi.org/10.17632/6gdybpjnwh.1


Categories

Remote Sensing, Image Classification, Convolutional Neural Network, Deep Learning, Satellite Image

Licence

CC BY 4.0

The files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International licence. You can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY licence, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party.
