joshvm committed 783a4f0 (parent: efaf5bc): Update README.md (README.md, +134 -0)

    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- trees
- biology
- ecology
- forest
pretty_name: OAM-TCD
size_categories:
- 1K<n<10K
---

# Dataset Card for OAM-TCD

## Dataset Details

OAM-TCD is a dataset of high-resolution (10 cm/px) tree cover maps with instance-level masks for 280k trees and 56k tree groups.

Images in the dataset are provided as 2048x2048 px RGB GeoTIFF tiles. The dataset can be used to train both instance and semantic segmentation models.
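
If the dataset is hosted on the Hugging Face Hub, it can be loaded with the `datasets` library. A minimal sketch (the repository id below is an assumption; substitute the actual Hub path):

```python
from datasets import load_dataset

# Repository id is an assumption; check the dataset page for the real path.
ds = load_dataset("restor/tcd", split="train")

print(ds.features)  # inspect the image, mask, and metadata columns
sample = ds[0]
```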

### Dataset Description

- **Curated by:** Restor / ETH Zurich
- **Funded by:** Restor / ETH Zurich, supported by a Google.org AI for Social Good grant (ID: TF2012-096892, AI and ML for advancing the monitoring of Forest Restoration)
- **License:** Annotations are predominantly released under a CC BY 4.0 license, with around 10% licensed as CC BY-NC 4.0 or CC BY-SA 4.0. These less permissively licensed images are distributed in separate repositories to avoid any ambiguity for downstream use.

OIN declares that all imagery contained within is licensed as [CC BY 4.0](https://github.com/openimagerynetwork/oin-register); however, some images are labelled as CC BY-NC 4.0 or CC BY-SA 4.0 in their metadata.

To ensure that image providers' rights are upheld, we split these images into license-specific repositories, allowing users to pick which combination of compatible licenses is appropriate for their application. We have initially released model variants trained on CC BY + CC BY-NC imagery. CC BY-SA imagery was removed from the training split, but it can be used for evaluation.

### Dataset Sources

All imagery in the dataset is sourced from OpenAerialMap (OAM, part of the Open Imagery Network / OIN).

## Uses

We anticipate that most users of the dataset will want to map tree cover in aerial orthomosaics, either captured by drones/unmanned aerial vehicles (UAVs) or from aerial surveys such as those provided by governmental organisations.

### Direct Use

The dataset supports applications where the user provides an RGB input image and expects a tree (canopy) map as an output. Depending on the type of trained model, the result could be a binary segmentation mask or a list of detected tree/tree-group instances. Beyond our baseline releases, the dataset can be combined with other license-compatible data sources to train models. It can also act as a benchmark for other tree detection models; we specify a test split which users can evaluate against, but currently there is no formal infrastructure or leaderboard for this.

### Out-of-Scope Use

The dataset does not contain detailed annotations for trees in closed canopy (i.e. trees that are touching). Thus the current release is not suitable for training models to delineate individual trees in closed-canopy forest. The dataset contains images at a fixed resolution of 10 cm/px. Models trained on this dataset at nominal resolution may under-perform if applied to images with significantly different resolutions (e.g. satellite imagery).
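
If your imagery has a different ground sample distance, one option is to resample it to the dataset's nominal 10 cm/px before inference. A minimal sketch with `rasterio`, assuming the raster uses a projected CRS with units of metres:

```python
import rasterio
from rasterio.enums import Resampling

TARGET_GSD = 0.10  # nominal dataset resolution, metres per pixel

with rasterio.open("orthomosaic.tif") as src:
    scale = src.res[0] / TARGET_GSD  # > 1 upsamples, < 1 downsamples
    data = src.read(
        out_shape=(src.count, int(src.height * scale), int(src.width * scale)),
        resampling=Resampling.bilinear,
    )
```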

The dataset does not directly support applications related to carbon sequestration measurement (e.g. carbon credit verification) or above-ground biomass estimation, as it does not contain the structural or species information required for accurate allometric calculations (Reierson et al., 2021). Similarly, models trained on the dataset should not be used for any decision-making or policy applications without further validation on appropriate data, particularly in locations that are under-represented in the dataset.

## Dataset Structure

The dataset contains images paired with semantic masks and object segments (instance polygons). The masks contain instance-level annotations for (1) individual trees, labelled **tree**, and (2) groups of trees, which we label **canopy**. For training our models we binarise the masks. Metadata from OAM for each image is provided and described below.
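
For example, a binary tree-cover mask can be derived by treating both classes as foreground. A minimal sketch with `numpy`; the class indices are assumptions, so verify them against the dataset's label mapping:

```python
import numpy as np

# Assumed class indices -- check the dataset's label mapping.
TREE_ID, CANOPY_ID = 1, 2

def binarise(mask: np.ndarray) -> np.ndarray:
    """Collapse the tree and canopy classes into a single foreground class."""
    return np.isin(mask, (TREE_ID, CANOPY_ID)).astype(np.uint8)
```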

The dataset is released with suggested training and test splits, stratified by biome. These splits were used to derive the results presented in the main paper. Where known, each image is also tagged with its terrestrial biome index in [-1, 14]. This assignment was made by looking for intersections between tile polygons and reference biome polygons; an index of -1 means a biome could not be matched. Tiles sourced from a given OAM image are isolated to a single fold (and split) to avoid train/test leakage.

k-fold cross-validation indices within the training set are also provided: each image is assigned an integer in [0, 4] indicating its validation fold. Users are also free to pick their own validation protocol (for example, one could split the data into biome folds), but results may not be directly comparable with those from the release paper.
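
One cross-validation round might then look like the following sketch, where `validation_fold` is an assumed column name for the fold index:

```python
from datasets import load_dataset

ds = load_dataset("restor/tcd", split="train")  # repository id assumed

fold = 0  # hold out fold 0, train on folds 1-4
train = ds.filter(lambda row: row["validation_fold"] != fold)
val = ds.filter(lambda row: row["validation_fold"] == fold)
```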

## Dataset Creation

### Curation Rationale

The use-case within Restor (Crowther et al., 2022) is to feed into a broader framework for restoration site assessment. Many users of the Restor platform are stakeholders in restoration projects; some have access to tools like UAVs and are interested in providing data for site monitoring. Our goal was to facilitate training tree canopy detection models that work robustly in any location. The dataset was curated with this diversity challenge in mind: it contains images from around the world and (by serendipity) covers most terrestrial biome classes.

It was important during the curation process that the data sources be open-access, so we selected OpenAerialMap as our image source. OAM contains a large amount of permissively licensed global imagery at high resolution (chosen to be < 10 cm/px for our application).

### Source Data

#### Data Collection and Processing

We used the OAM API to download a list of surveys on the platform. Using the metadata, we discarded surveys with a ground sample distance greater than 10 cm/px (for example, satellite imagery). The remaining sites were binned into 1-degree-square regions across the world; some sites in OAM have been uploaded as multiple assets, and naive random sampling would tend to pick several from the same location. We then sampled sites from each bin, and random non-empty tiles from each site, until we had reached around 5000 tiles. This total was arbitrary, constrained by our estimated annotation budget.
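
The binning step can be sketched as follows; the survey records and their fields are assumptions for illustration:

```python
import math
from collections import defaultdict

# Hypothetical survey records, e.g. parsed from an OAM API response.
surveys = [
    {"id": "a", "lat": 47.37, "lon": 8.54},
    {"id": "b", "lat": 47.41, "lon": 8.61},  # falls in the same cell as "a"
    {"id": "c", "lat": -1.29, "lon": 36.82},
]

# Group survey centroids into 1-degree-square cells.
bins = defaultdict(list)
for survey in surveys:
    key = (math.floor(survey["lat"]), math.floor(survey["lon"]))
    bins[key].append(survey)

# Sampling per cell spreads the selection geographically, rather than
# over-representing locations that were uploaded as multiple assets.
```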

Interestingly, we did not make any attempt to filter for images that contain trees, but in practice there are few negative images in the dataset. Similarly, we did not try to filter for images captured in a particular season, so there are trees without leaves in the dataset.

#### Who are the source data producers?

The images are provided by users of OpenAerialMap / contributors to the Open Imagery Network.

### Annotations

#### Annotation process

Annotation was outsourced to commercial data labelling companies who provided access to teams of professional annotators. We experimented with several labelling providers and compensation strategies.

Annotators were given a guideline document with examples of how we expected images to be labelled. This document evolved over the course of the project as we encountered edge cases and questions from annotation teams. As described in the main paper, annotators were instructed to label open-canopy trees (i.e. trees that were not touching) individually. Where possible, small groups of trees should also be labelled individually, and we suggested < 5 trees as an upper bound. Annotators were encouraged to look for cues that indicate whether an object is a tree, such as the presence of (relatively long) shadows and crown shyness (inter-crown spacing). Larger groups of trees, or ambiguous regions, were labelled as "canopy". Annotators were provided with full-size image tiles (2048 x 2048 px), and most images were annotated by a single person from a team of several annotators.

There are numerous structures for annotator compensation: for example, paying per polygon, paying per image, or paying by total annotation time. The images in OAM-TCD are complex, and per-image payment was excluded early on as the reported annotation time varied significantly. Anecdotally, we found that the most practical compensation structure was to pay for a fixed block of annotation time with regular review meetings with labelling team managers. Overall, the cost per image was between 5 and 10 USD, and the total annotation cost was approximately 25k USD. Unfortunately we do not have accurate estimates of the time spent annotating all images, but we did advise annotators to flag an image for review if they spent more than 45-60 minutes on it.

#### Who are the annotators?

We did not have direct contact with any annotators, and their identities were anonymised during communication, for example when providing feedback through managers.

#### Personal and Sensitive Information

Contact information is present in the metadata for imagery. We do not distribute this data directly, but each image tile is accompanied by a URL pointing to a JSON document on OpenAerialMap where it is publicly available. Otherwise, the imagery is provided at a low enough resolution that it is not possible to identify individual people.
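
Each tile's metadata document can be fetched from that URL. A minimal sketch with `requests`; the repository id and the metadata column name are assumptions:

```python
import requests
from datasets import load_dataset

ds = load_dataset("restor/tcd", split="train")  # repository id assumed
url = ds[0]["oam_metadata_url"]  # assumed column name; check ds.features

metadata = requests.get(url, timeout=30).json()
print(metadata.get("title"), metadata.get("provider"))
```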

The image tiles in the dataset contain geospatial information which is not obfuscated. However, as one of the purposes of OpenAerialMap is humanitarian mapping (e.g. tracing objects for inclusion in OpenStreetMap), accurate location information is required, and uploaders are aware that this information is available to other users. We also assume that image providers had the right to capture imagery where they did, including following local regulations that govern UAV activity.

An argument for keeping accurate geospatial information is that annotations can be verified against independent sources, for example global land cover maps. The annotations can also be combined with other datasets, like multispectral satellite imagery, or products like the Global Ecosystem Dynamics Investigation (GEDI; Dubayah et al., 2020).

## Bias, Risks, and Limitations

There are several potential sources of bias in our dataset. The first is geographic, related to where users of OAM are likely to capture data: accessible locations that are amenable to UAV flights. Some locations and countries place strong restrictions on UAV possession and use, for example. One of the use-cases for OAM is providing traceable imagery for OpenStreetMap, which is also likely to bias what sorts of scenes users capture.

The second is bias from annotators, who were not ecologists. Benchmark results from models trained on the dataset suggest that overall label quality is sufficient for accurate semantic segmentation. However, for instance segmentation, annotators had freedom to choose whether or not to label trees individually. This naturally resulted in some inconsistency in what annotators determined was a tree, and at what point a group of trees was annotated as a group. We discuss in the main paper the issue of conflicting definitions of "tree" among researchers and monitoring protocols.

The example annotations shown on the dataset page highlight some of these inconsistencies. Some annotators labelled individual trees within group labels; in one example, most palm trees are individually segmented, but some groups are not. A future goal for the project is to improve label consistency, identify incorrect labels, and split group labels into individuals. After annotation was complete, we contracted two different labelling organisations to review (and re-label) subsets of the data; we have not released this data yet, but plan to in the future.

The greatest risk that we foresee in releasing this dataset is usage in out-of-scope scenarios, for example using trained models, without additional validation, on imagery from regions or biomes that the dataset does not represent. Similarly, there is a risk that users apply the model in inappropriate ways, such as measuring canopy cover on imagery taken during periods of abscission (when trees lose their leaves). It is important that users carefully consider timing (seasonality) when comparing time-series predictions.

While we believe the risk of malicious or unethical use is low, given that other global tree maps exist and are readily available, it is possible that models trained on the dataset could be used to identify areas of tree cover for illegal logging or other forms of land exploitation. Given that our models can segment tree cover at high resolution, they could also be used for automated surveillance or military mapping purposes.

### Recommendations

Please read the bias information above and take it into account when using the dataset. Ensure that you have a good validation protocol in place before using a model trained on this dataset.

## Citation

If you use OAM-TCD in your own work or research, please cite our arXiv paper and reference the dataset DOI.

**BibTeX:**

TBD

**APA:**

TBD

## Dataset Card Authors

Josh Veitch-Michaelis (josh [at] restor.eco)

## Dataset Card Contact

Please contact josh [at] restor.eco if you have any queries about the dataset, including requests for image removal if you believe your rights have been infringed.