---
language:
- en
license: apache-2.0
configs:
- config_name: LFAI_RAG_niah_v1
  data_files:
  - split: base_eval
    path: LFAI_RAG_niah_v1.json
  - split: 64k_eval
    path: long_contexts/LFAI_RAG_niah_v1_64k.json
  - split: 128k_eval
    path: long_contexts/LFAI_RAG_niah_v1_128k.json
  - split: padding
    path: haystack_padding.json
  default: true
---
# LFAI_RAG_niah_v1
This dataset aims to be the basis for RAG-focused Needle in a Haystack evaluations for [LeapfrogAI](https://github.com/defenseunicorns/leapfrogai)🐸.
## Dataset Details
LFAI_RAG_niah_v1 contains 120 context entries intended for Needle in a Haystack RAG evaluations.
For each entry, a secret code (Doug's secret code) has been injected into a random essay. This secret code is the "needle" that the LLM under evaluation must find.
Example:
```json
{
  "context_length": 512,
  "context_depth": 0.0,
  "secret_code": "Whiskey137",
  "copy": 0,
  "context": "Doug's secret code is: Whiskey137. Remember this. Venture funding works like gears. A typical startup goes through several rounds of funding, and at each round you want to take just enough money to reach the speed where you can shift into the next gear.\n\nFew startups get it quite right. Many are underfunded. A few are overfunded, which is like trying to start driving in third gear."
}
```
### Dataset Sources
Data was generated using the essays of [Paul Graham](https://www.paulgraham.com/articles.html) as the haystack into which a random secret code is injected.
## Uses
This dataset is ready to use for Needle in a Haystack evaluations. The `base_eval` split contains the base entries, the `64k_eval` and `128k_eval` splits provide long-context variants, and the `padding` split provides haystack padding text.
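As a minimal sketch of how an entry is scored (illustrative only, not the official LeapfrogAI evaluation harness), an entry counts as solved when the model's answer contains that entry's secret code:

```python
# Minimal scoring sketch: an entry is "solved" when the model's answer
# contains the entry's secret code. Illustrative only; not the official
# LeapfrogAI evaluation code.

def found_needle(model_response: str, secret_code: str) -> bool:
    """Return True if the secret code appears in the model's response."""
    return secret_code.lower() in model_response.lower()

# Sample entry mirroring the schema shown above.
entry = {"secret_code": "Whiskey137", "context_length": 512, "context_depth": 0.0}

print(found_needle("Doug's secret code is Whiskey137.", entry["secret_code"]))  # True
print(found_needle("I could not find any code.", entry["secret_code"]))         # False
```

With the `datasets` library, the entries themselves can be loaded with something like `load_dataset("defenseunicorns/LFAI_RAG_niah_v1", split="base_eval")` (repository id assumed here).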
## Dataset Structure
Each entry in this dataset contains the following fields:
- `context_length`: the approximate length of the `context` field in characters, rounded to the nearest power of 2
- `context_depth`: approximately how far into the context the secret code phrase is injected, expressed as a fraction of document depth (0.0 = start, 1.0 = end)
- `secret_code`: the secret code generated for the given entry, used to verify that the LLM found the correct code
- `copy`: an index distinguishing repeated entries; each length/depth combination is repeated a few times, and this field records which repetition the entry belongs to
- `context`: the portion of text with the injected secret code
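The relationship between `context_depth` and the injection point can be illustrated with a small sketch (a hypothetical simplification, not the generation code actually used to build this dataset):

```python
def inject_needle(haystack: str, needle: str, depth: float) -> str:
    """Insert `needle` at roughly `depth` into `haystack`, where 0.0 is the
    very start and 1.0 is the very end, snapping forward to the next sentence
    boundary so the surrounding prose stays readable.
    Illustrative sketch only; not the code used to generate this dataset."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be a fraction between 0.0 and 1.0")
    pos = int(len(haystack) * depth)
    if pos == 0:
        return needle + " " + haystack           # depth 0.0: needle leads the context
    boundary = haystack.find(". ", pos)
    if boundary == -1:
        return haystack.rstrip() + " " + needle  # no sentence boundary left: append
    boundary += 2                                # keep the period and space intact
    return haystack[:boundary] + needle + " " + haystack[boundary:]

essay = "Venture funding works like gears. A typical startup goes through several rounds."
print(inject_needle(essay, "Doug's secret code is: Whiskey137.", 0.0))
```

At depth 0.0 the needle leads the context, matching the example entry above; larger depths push it toward later sentence boundaries.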
## Dataset Card Authors
The LeapfrogAI🐸 team at [Defense Unicorns](https://www.defenseunicorns.com/)🦄
## Dataset Card Contact
- [email protected]