---
language:
- en
---
# Curated Manifold Markets Subset
There has been substantial interest in using large language models to answer forecasting competition questions like the ones found on Metaculus or Manifold Markets. Metaculus's API is restricted to teams that ask for permission to use it, but Manifold's API is openly available under very liberal terms. This makes Manifold an appealing option for forecasting model authors, but for one problem: Manifold takes a libertarian approach to question moderation and allows a lot of junk markets on the platform. While this makes it an excellent incubator for new question formats and ideas, it can make training models on the resulting data tricky. This dataset is the top 10,000 resolved Manifold Markets questions in yes/no format, as graded by an LLM evaluator against the criteria set out in the Biases and Limitations section below. The result is a much higher-signal dataset than you would get by pulling from the API with minimal filtering.
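For a sense of what minimal filtering looks like, here is a rough sketch of pulling raw markets from Manifold's public API. The `/v0/markets` endpoint is documented at https://docs.manifold.markets/api; the filter below is illustrative, not the curation method used for this dataset.

```python
import requests

# Fetch a page of recent markets from the public API (no key required).
resp = requests.get("https://api.manifold.markets/v0/markets", params={"limit": 500})
resp.raise_for_status()
markets = resp.json()

# Keep only resolved binary (yes/no) markets, the same shape as this dataset.
# Note this does nothing about junk markets, which is what the LLM curation
# described below is for.
binary_resolved = [
    m for m in markets
    if m.get("outcomeType") == "BINARY" and m.get("isResolved")
]
print(f"{len(binary_resolved)} of {len(markets)} fetched markets are resolved yes/no")
```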
## Usage
### Use Cases
- Baseline tuning strategy and validation set for answering forecasting questions
- Because forecasting questions resolve yes/no, they can be used to train a weave evaluator (see the sketch after this list)
- A good foundation to backtranslate from when making further datasets
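For the evaluator use case, each market reduces naturally to a (prompt, label) pair. A minimal, hypothetical sketch; the prompt template is an assumption, not a fixed format:

```python
def to_evaluator_pair(market):
    """Turn a market into a yes/no training pair. The wording is illustrative."""
    prompt = (
        f"Question: {market['question']}\n"
        f"Resolution criteria: {market['textDescription']}\n"
        "Did this market resolve YES?"
    )
    label = market["resolution"] == "YES"  # resolutions here are "YES" or "NO"
    return prompt, label
```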
### Quickstart With HuggingFace Datasets
```python
import datasets
import datetime

def format_market_details(market):
    """Render a market's details as the text a forecaster would see."""
    question = market.get("question")
    yes_probability = market.get("probability") * 100
    no_probability = (1 - market.get("probability")) * 100
    unique_bettor_count = market.get("uniqueBettorCount")
    creator_name = market.get("creatorName")
    # Timestamps are milliseconds since the epoch; convert in UTC so the
    # "UTC" label in the output is accurate.
    created_time = datetime.datetime.fromtimestamp(
        market.get("createdTime") / 1000, tz=datetime.timezone.utc
    ).strftime("%Y-%m-%d at %H:%M UTC")
    close_time = datetime.datetime.fromtimestamp(
        market.get("closeTime") / 1000, tz=datetime.timezone.utc
    ).strftime("%Y-%m-%d at %H:%M UTC")
    text_description = market.get("textDescription")
    resolution = market.get("resolution").title() + "."
    out = ""
    out += "Manifold Markets\n\n"
    out += f"{question}\n"
    out += f"YES {yes_probability:.2f}% NO {no_probability:.2f}% "
    out += f"| {unique_bettor_count} Bettors\n"
    out += f"Creator: {creator_name}\n"
    out += f"Created: {created_time}\n"
    out += f"Closes: {close_time}\n\n"
    out += f"Description & Resolution Criteria: {text_description}\n\n"
    out += f"Resolution: {resolution}"
    return out

train = datasets.load_dataset("jdpressman/manifold-baseline-curated-v0")["train"]
for market_details in train:
    print(format_market_details(market_details))
```
### Raw Quickstart
```python
import json
import datetime

def format_market_details(market):
    """Render a market's details as the text a forecaster would see."""
    question = market.get("question")
    yes_probability = market.get("probability") * 100
    no_probability = (1 - market.get("probability")) * 100
    unique_bettor_count = market.get("uniqueBettorCount")
    creator_name = market.get("creatorName")
    # Millisecond timestamps, converted in UTC to match the "UTC" label.
    created_time = datetime.datetime.fromtimestamp(
        market.get("createdTime") / 1000, tz=datetime.timezone.utc
    ).strftime("%Y-%m-%d at %H:%M UTC")
    close_time = datetime.datetime.fromtimestamp(
        market.get("closeTime") / 1000, tz=datetime.timezone.utc
    ).strftime("%Y-%m-%d at %H:%M UTC")
    text_description = market.get("textDescription")
    resolution = market.get("resolution").title() + "."
    out = ""
    out += "Manifold Markets\n\n"
    out += f"{question}\n"
    out += f"YES {yes_probability:.2f}% NO {no_probability:.2f}% "
    out += f"| {unique_bettor_count} Bettors\n"
    out += f"Creator: {creator_name}\n"
    out += f"Created: {created_time}\n"
    out += f"Closes: {close_time}\n\n"
    out += f"Description & Resolution Criteria: {text_description}\n\n"
    out += f"Resolution: {resolution}"
    return out

with open("train.json") as infile:
    train = json.load(infile)

for market_details in train:
    print(format_market_details(market_details))
```
## License
While no explicit license is given for this dataset, the Manifold Markets API page tells users to "Feel free to use the API for any purpose you'd like" and provides a site dump as a convenience. This implies that the Manifold team should be okay with this dataset. If they're not, they can contact me or HuggingFace to have it taken down.
## Data Structure
The data is a list of Manifold market details JSON objects as they are returned by the API. Here is a sample item:
{"id": "JOLqUM7VZVWGyPMyjgOM", "creatorId": "fP5OQUWYt4MW17A2giGjMGsw1uu2", "creatorUsername": "LarsDoucet", "creatorName": "Lars Doucet", "createdTime": 1640805909009, "creatorAvatarUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh_23ZmfLBMGBR2crNwb0T8hBnPAap5nkWiSKuB=s96-c", "closeTime": 1672531200000, "question": "Will Joe Rogan interview a guest about Georgism in 2022?", "slug": "will-joe-rogan-interview-a-guest-ab", "url": "https://manifold.markets/LarsDoucet/will-joe-rogan-interview-a-guest-ab", "pool": {"NO": 103.73708237350644, "YES": 996.054209916458}, "probability": 0.031616466242030815, "p": 0.23866581093751968, "totalLiquidity": 184.67960075647989, "outcomeType": "BINARY", "mechanism": "cpmm-1", "volume": 4123.3286725950675, "volume24Hours": 0, "isResolved": true, "resolution": "NO", "resolutionTime": 1672976192735, "resolutionProbability": 0.03, "uniqueBettorCount": 50, "lastUpdatedTime": 1672976168074, "lastBetTime": 1672069861903, "lastCommentTime": 1672976161444, "description": "This market will resolve to "Yes" if, by December 31, 11:59:59 PM CT, Joseph James Rogan (aka "Joe Rogan"), host of the "Joe Rogan Experience" on Spotify, invites a guest onto that podcast who mentions any of these three words -- "Georgism", "Geoism", or "Land Value Tax" -- in a favorable context.\n#JoeRogan\n#Georgism\n#Economics\n#Podcast", "groupSlugs": ["georgism", "politics-default", "economics-default"], "textDescription": "This market will resolve to "Yes" if, by December 31, 11:59:59 PM CT, Joseph James Rogan (aka "Joe Rogan"), host of the "Joe Rogan Experience" on Spotify, invites a guest onto that podcast who mentions any of these three words -- "Georgism", "Geoism", or "Land Value Tax" -- in a favorable context.\n#JoeRogan\n#Georgism\n#Economics\n#Podcast"}
## Biases and Limitations
The curation was performed by SOLAR 10.7B base using the weave evaluator. Three rubrics were used to filter out markets with undesirable traits.
Because all the rubric questions treat "yes" as the desirable answer, the evaluator could be biased against texts whose character pushes it to answer "no" more frequently. I did a quick spot check that the distribution of yes and no resolutions among the chosen questions didn't look very skewed, but it would be a good idea to compare the distribution of yes and no resolutions in the dataset as a whole against the subset chosen with the weave evaluator. I will do this later.
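The spot check amounts to counting resolutions. A sketch, assuming the dataset is loaded as in the quickstart above:

```python
from collections import Counter
import datasets

train = datasets.load_dataset("jdpressman/manifold-baseline-curated-v0")["train"]
# Tally YES vs. NO resolutions in the curated subset.
print(Counter(market["resolution"] for market in train))
```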
## Planned Improvements
- Train models on this dataset to get a forecasting baseline
- Check the distribution of yes and no resolutions in the chosen subset vs. the distribution over the full dataset
- Change the weave evaluator questions so that a mix of yes and no answers is desired