
Model Card for GeoLM for Toponym Recognition

A language model for detecting toponyms (i.e., place names) in sentences. We pretrain GeoLM on worldwide OpenStreetMap (OSM), Wikidata, and Wikipedia data, then fine-tune it for the toponym recognition task on the GeoWebNews dataset.

Model Details

Model Description

GeoLM is pretrained on worldwide OpenStreetMap (OSM), Wikidata, and Wikipedia data, then fine-tuned for the toponym recognition task on the GeoWebNews dataset.

Uses

This is a GeoLM model fine-tuned for the toponym detection task. The input is a sentence; the output is a sequence of token-level labels marking the detected toponyms.

To use this model, please refer to the code below.

  • Option 1: Load the weights into a BERT token-classification model (same procedure as the hosted inference demo; a sketch for converting the predicted labels into toponym strings follows after the options)

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer


# Model name from Hugging Face model hub
model_name = "zekun-li/geolm-base-toponym-recognition"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Example input sentence
input_sentence = "Minneapolis, officially the City of Minneapolis, is a city in the state of Minnesota and the county seat of Hennepin County."

# Tokenize input sentence
tokens = tokenizer.encode(input_sentence, return_tensors="pt")

# Pass tokens through the model (no gradients needed for inference)
with torch.no_grad():
    outputs = model(tokens)

# Retrieve the most likely label id for each token
predicted_labels = torch.argmax(outputs.logits, dim=2)
predicted_labels = predicted_labels.detach().cpu().numpy()

# Decode predicted labels
predicted_labels = [model.config.id2label[label] for label in predicted_labels[0]]

# Print predicted labels
print(predicted_labels)
# ['O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'O',
# 'O', 'O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'B-Topo', 'I-Topo', 'I-Topo', 'O', 'O', 'O']
  • Option 2: Load the weights into a GeoLM model

To appear soon
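
The Option 1 snippet prints token-level BIO labels. The snippet below is a minimal sketch (not part of the released code) showing one way to align those labels with the input text and recover the toponym strings; it reuses the tokenizer, model, and input_sentence defined above and assumes a fast tokenizer so that return_offsets_mapping is available.

# A minimal sketch for turning the token-level BIO labels from Option 1 into toponym strings.
encoded = tokenizer(input_sentence, return_offsets_mapping=True, return_tensors="pt")
offsets = encoded.pop("offset_mapping")[0].tolist()

with torch.no_grad():
    logits = model(**encoded).logits
labels = [model.config.id2label[i] for i in logits.argmax(dim=2)[0].tolist()]

# Merge consecutive B-Topo / I-Topo tokens into character spans, then slice the sentence
toponyms, span = [], None
for (start, end), label in zip(offsets, labels):
    if start == end:  # special tokens such as [CLS] and [SEP] have empty offsets
        continue
    if label.startswith("B-"):
        if span:
            toponyms.append(input_sentence[span[0]:span[1]])
        span = [start, end]
    elif label.startswith("I-") and span:
        span[1] = end
    else:
        if span:
            toponyms.append(input_sentence[span[0]:span[1]])
        span = None
if span:
    toponyms.append(input_sentence[span[0]:span[1]])

print(toponyms)
# Expected for the example sentence (given the labels shown above), e.g.:
# ['Minneapolis', 'Minneapolis', 'Minnesota', 'Hennepin County']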

Training Details

Training Data

GeoWebNews (Credit to Gritta et al.)

Download link: https://github.com/milangritta/Pragmatic-Guide-to-Geoparsing-Evaluation/blob/master/data/GWN.xml
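
GeoWebNews annotates toponyms as character-offset spans over article text. The sketch below illustrates one way such span annotations could be converted into the B-Topo/I-Topo/O token labels used for fine-tuning; the XML tag names (article, text, toponym, start, end) are assumptions about the file layout rather than the official GWN.xml schema, so adjust them to the actual file.

import xml.etree.ElementTree as ET
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zekun-li/geolm-base-toponym-recognition")

def spans_to_bio(text, spans, max_length=512):
    # Convert character-level toponym spans into token-level BIO labels
    enc = tokenizer(text, return_offsets_mapping=True, truncation=True, max_length=max_length)
    labels = []
    for start, end in enc["offset_mapping"]:
        if start == end:  # special tokens
            labels.append("O")
            continue
        tag = "O"
        for s, e in spans:
            if s <= start and end <= e:
                tag = "B-Topo" if start == s else "I-Topo"
                break
        labels.append(tag)
    return enc, labels

# Hypothetical parsing of GWN.xml; tag names are assumed, not the official schema
root = ET.parse("GWN.xml").getroot()
for article in root.iter("article"):
    text = article.findtext("text")
    spans = [(int(t.findtext("start")), int(t.findtext("end")))
             for t in article.iter("toponym")]
    enc, labels = spans_to_bio(text, spans)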

Training Procedure

Speeds, Sizes, Times

More information needed

Evaluation

Testing Data, Metrics & Results

Testing Data

More information needed

Metrics

More information needed

Results

More information needed

Technical Specifications

Model Architecture and Objective

Transformer encoder with a token-classification head (~108M parameters, float32 weights), trained with a token-level toponym tagging objective.

Compute Infrastructure

More information needed

Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

Citation

BibTeX:

More information needed

APA:

More information needed

Model Card Author

More information needed
