
Dataset Card for Palaeochannel

Dataset Description

This repository contains the MAPS (Multitemporal multispectral dAtaset for Palaeochannels Segmentation) dataset from the following paper:

The identification of subsoil elements in remote sensing images presents significant challenges, as these elements are only detectable indirectly through proxies such as changes in soil and vegetation. Palaeochannels, remnants of ancient rivers and streams that dried up due to climatic, geological, or anthropogenic factors, represent one such category of subsoil elements.

In this repository, we aim to establish a baseline for semantic segmentation to address the unique challenges of segmenting elongated and branched palaeochannels under diverse landscape conditions and across different times of the year, using Sentinel-2 multispectral imagery. The results provide preliminary insights into overcoming the challenges posed by the fluctuating visibility of subsoil elements by applying deep learning techniques to analyze temporal series data.

Dataset Creation

The Sentinel-2 images were acquired by ESA during 2022 and downloaded via Google Earth Engine in UTM projection. For each month, a single cloud-free image was selected, yielding 12 images. The dataset was annotated by our in-house archaeologists for three periods of the year (P1 = March and April, P2 = July and August, P3 = November and January).

The training set covers a large part of the coastal plain of North-Eastern Italy between the cities of Venice and Aquileia, an area of 1500 sq. km. The test set covers an additional 65 sq. km of the coastal plain south of Venice.

Two training sets are used in the experimental settings:

  • TR1 - one image (month) per period is selected: the April, August, and November images with the corresponding annotations;
  • TR2 - two consecutive images (months) per period are selected to augment the training data: March and April for P1 annotations, July and August for P2 annotations, and November and January for P3 annotations.
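The period-to-month pairings above can be sketched as plain Python mappings; the `TR1`/`TR2` names and the `months_for` helper are illustrative, not part of the published data loader:

```python
# Mapping of annotation periods to the Sentinel-2 monthly images they are
# paired with, following the TR1/TR2 setups described in this card.
# These dict names and the helper below are hypothetical, for illustration.
TR1 = {"P1": ["April"], "P2": ["August"], "P3": ["November"]}
TR2 = {
    "P1": ["March", "April"],
    "P2": ["July", "August"],
    "P3": ["November", "January"],
}


def months_for(setup: dict, period: str) -> list:
    """Return the monthly images trained against a given period's annotations."""
    return setup[period]
```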

Data Instances

Each multispectral image is saved in .tif format, and the corresponding annotation is in .geojson format.
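Since each image comes with a matching annotation file, pairing the two is a matter of file bookkeeping. The sketch below assumes image and annotation share a filename stem; that naming convention is an assumption for illustration, not documented behaviour:

```python
from pathlib import Path


def pair_image_with_annotation(image_path, annotations_dir):
    """Hypothetical helper: match a .tif tile to a .geojson annotation by stem.

    Assumes (for illustration only) that 'tile.tif' pairs with 'tile.geojson'.
    Returns the annotation path, or None when no annotation file exists.
    """
    stem = Path(image_path).stem
    candidate = Path(annotations_dir) / f"{stem}.geojson"
    return candidate if candidate.exists() else None
```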

The data loader supports six configurations, combining:

  • fold type: train, validation, or test
  • data setup: TR1 or TR2 training set

We developed the data loader and made it publicly available in the code repository.

Data Structure

The dataset is organized as follows:

  • Metadata: metadata.parquet
  • Sentinel-2 input images: Imagery/Train_Val/*.tif and Imagery/Test/*.tif
  • Annotation masks: annotations/*.geojson
  • Precisely annotated areas: Area Train_Val_Test/Train_Area.geojson, Area Train_Val_Test/Validation_Area.geojson, and Area Train_Val_Test/Test_Area.geojson
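A quick sanity check that a local copy matches this layout can be written with the standard library alone; the `EXPECTED` tuple lists exactly the paths documented above, while the function name is illustrative:

```python
from pathlib import Path

# Paths documented in this dataset card; the check itself is a
# hypothetical convenience, not part of the released code.
EXPECTED = (
    "metadata.parquet",
    "Imagery/Train_Val",
    "Imagery/Test",
    "annotations",
    "Area Train_Val_Test/Train_Area.geojson",
    "Area Train_Val_Test/Validation_Area.geojson",
    "Area Train_Val_Test/Test_Area.geojson",
)


def missing_entries(root):
    """Return the documented paths that are absent under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]
```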

Data loading

We publish the data, as well as the code to load it and reproduce our work. In particular, the PyTorch data loader can be found in the GitHub repository.
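Even without the repository's loader, the .geojson annotations can be inspected with the standard library, since GeoJSON is plain JSON (a FeatureCollection of features with geometries). The function name is illustrative:

```python
import json


def load_polygons(geojson_path):
    """Hypothetical helper: read the geometries from a GeoJSON annotation file.

    GeoJSON is plain JSON, so no GIS dependency is needed just to inspect it.
    """
    with open(geojson_path) as f:
        collection = json.load(f)
    return [feature["geometry"] for feature in collection.get("features", [])]
```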

Citation Information

@article{,
  title={},
  author={},
  journal={},
  volume={},
  number={},
  pages={}
}

Contributions

Thanks to the Center for Cultural Heritage Technology - Istituto Italiano di Tecnologia of Venice, Italy for adding this dataset.
