Dataset Description
The National Environmental Policy Act Text Corpus (NEPATEC 1.0) is an AI-ready dataset of NEPA documents collected through a joint effort between Pacific Northwest National Laboratory (PNNL) and the Office of Policy (OP). NEPATEC 1.0 contains data extracted from the Environmental Impact Statement (EIS) Database provided by the United States Environmental Protection Agency (EPA). An EIS is a particular type of NEPA document (in PDF form) that analyzes the potential environmental effects of a proposed federal project and identifies ways to mitigate those effects. NEPATEC 1.0 contains textual data extracted from 28,212 documents across 2,917 different projects, totaling 4.8 million pages and over 3.6 billion tokens (counted with the GPT-2 tokenizer). Along with the text parsed from these 28k documents, we also include project metadata as listed below:
- List of agencies associated with the project
- List of states
- List of EPA Comment Letter Dates
- List of Federal Register Dates
In addition to project-level metadata, NEPATEC 1.0 also contains named entities for each document at a granular, page level. Five entity types are extracted from the text of each page, as shown below:
- Name: Any name, ranging from the name of a person to the name of a project
- Date: Any reference to a specific date
- Agency: Any organization
- Location: Any location, ranging from the location of the site to a reference to a street/county/state/country
- Title: Any relevant mention of a title, aimed at extracting the document title
- Curated by: Pacific Northwest National Laboratory
- Funded by: Office of Policy, Department of Energy
- Language(s) (NLP): English
- License: CC0
Uses
NEPATEC 1.0 is a one-of-a-kind dataset for the environmental review/permitting domain. It can be used for various scientific studies, including (1) training LLMs for domain adaptation and (2) leveraging NLP models, including LLMs, for exploratory data analytics such as spatio-temporal trend analysis across project types, locations, and agencies, among others. Such studies can offer valuable insights into NEPA processes that can inform future environmental reviews.
Usage
To download and use the data with the Hugging Face datasets library, use the following code:
from datasets import load_dataset
data = load_dataset("PolicyAI/NEPATEC1.0")
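Continuing from the snippet above, the lines below show one way to inspect what was loaded. This is a minimal sketch: the "train" split name is an assumption, so print the returned object first to see the actual splits.
# Inspect the available splits and record counts
print(data)                      # shows the splits returned by load_dataset
projects = data["train"]         # assumed split name; adjust if it differs
print(len(projects), "EIS projects")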
Dataset Structure
The dataset is a list of EIS project-level dictionaries; a short access example follows the structure listings below. Each dictionary has the following structure:
{
"Project Title": Title of the EIS project,
"State": A set of states the EIS targets,
"Agency": A set of agencies the EIS is associated with,
"EPA Comment Letter Date": A set of public comment dates associated with the EIS documents,
"Federal Register Date": A set of registration dates associated with the EIS documents,
"Documents": A list of documents associated with the EIS and their corresponding textual data and extracted named entities
}
Each document associated with an EIS project datapoint is itself a dictionary with the following structure:
{
"Metadata": A dictionary containing the document title,
"Pages": A list of dictionaries, one per page of the document, containing the textual data for that page as well as the extracted named entities
}
The named entities are also structured as dictionaries, with the following fields:
{
"text": The text span of the named entity,
"label": The label assigned to the named entity,
"score": The confidence score that the text belongs to the given label
}
Dataset Creation
The NEPATEC 1.0 dataset was scraped from the United States Environmental Protection Agency's EIS database website by issuing an empty search, which returned all the documents in the database. The collection involved two steps:
- Metadata collection: The EPA website provides an option to download metadata for all the documents returned by a search
- Document scraping: In this step, we scraped and downloaded all the documents retrieved from the search
Data Collection and Processing
After downloading the data, one of the major issues we faced was merging documents by project name, since there were multiple projects with the same name as well as projects with slightly different names. To solve this issue, we followed a two-step merging process (a sketch follows this list):
- Duplicate merging: merging titles, and their corresponding metadata, that have duplicate names
- Fuzzy merging: merging similar titles, and their corresponding metadata, using fuzzy matching
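A minimal sketch of this two-step merging is shown below. The dataset card does not specify the matching library or threshold, so the use of rapidfuzz and the 90% similarity cutoff are assumptions made for illustration only.
from collections import defaultdict
from rapidfuzz import fuzz  # assumption: any fuzzy string-matching library would do

def merge_project_titles(titles, threshold=90):
    """Group project titles: exact duplicates first, then fuzzy-similar names."""
    # Step 1: duplicate merging -- collapse exact (case-insensitive) duplicates
    groups = defaultdict(list)
    for title in titles:
        groups[title.strip().lower()].append(title)

    # Step 2: fuzzy merging -- merge groups whose normalized titles are near-identical
    keys = list(groups)
    merged, used = [], set()
    for i, a in enumerate(keys):
        if a in used:
            continue
        cluster = list(groups[a])
        used.add(a)
        for b in keys[i + 1:]:
            if b not in used and fuzz.ratio(a, b) >= threshold:
                cluster.extend(groups[b])
                used.add(b)
        merged.append(cluster)
    return merged  # each inner list holds titles treated as the same project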
We used PyMuPDF to parse textual and image data from the downloaded documents. The parsed text was split by page. We then used this page-wise text to extract named entities with the GLiNER toolkit. The GLiNER model we used accepts around 400 tokens per input, so we split the page-wise text into chunks of 150 words and passed these chunks through the GLiNER pipeline.
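The snippet below sketches this extraction step under stated assumptions: the GLiNER checkpoint name (urchade/gliner_base) and the confidence threshold are illustrative choices, since the card only names the toolkit and the 150-word chunk size.
import fitz  # PyMuPDF
from gliner import GLiNER

# The checkpoint is an assumption; the card only states that GLiNER was used.
model = GLiNER.from_pretrained("urchade/gliner_base")
LABELS = ["Name", "Date", "Agency", "Location", "Title"]

def pages_with_entities(pdf_path, words_per_chunk=150):
    """Parse a PDF page by page and tag each page's text with named entities."""
    results = []
    for page in fitz.open(pdf_path):
        text = page.get_text()
        words = text.split()
        entities = []
        # Split the page into ~150-word chunks to stay within the model's input limit
        for start in range(0, len(words), words_per_chunk):
            chunk = " ".join(words[start:start + words_per_chunk])
            entities.extend(model.predict_entities(chunk, LABELS, threshold=0.5))
        results.append({"text": text, "entities": entities})
    return results
Each entity returned by predict_entities carries "text", "label", and "score" fields, which lines up with the named-entity structure described in the Dataset Structure section.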
Limitations
Because of the merging of projects with similar titles, we had to drop over 7,000 documents and their corresponding projects from our dataset. Hence, the NEPATEC 1.0 dataset does not contain all of the documents available on the EPA website.
Acknowledgement
This work was supported by the Office of Policy, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RLO1830. This dataset card has been cleared by PNNL for public release as PNNL-36100. The NEPATEC 1.0 dataset has been cleared by PNNL for public release as PNNL-SA-199568.