---
configs:
- config_name: LFAI_RAG_niah_v1
  data_files:
  - split: test
    path: "LFAI_RAG_niah_v1.json"
  default: true
license: apache-2.0
---

# LFAI_RAG_niah_v1

This dataset aims to be the basis for RAG-focused Needle in a Haystack evaluations for [LeapfrogAI](https://github.com/defenseunicorns/leapfrogai)🐸. 

## Dataset Details

LFAI_RAG_niah_v1 contains 120 context entries intended for Needle in a Haystack RAG evaluations.

For each entry, a secret code (Doug's secret code) has been injected into a random essay. This secret code is the "needle" that the LLM under evaluation must find.

Example:
```
{
  "context_length":512,
  "context_depth":0.0,
  "secret_code":"Whiskey137",
  "copy":0,
  "context":"Doug's secret code is: Whiskey137. Remember this. Venture funding works like gears. A typical startup goes through several rounds of funding, and at each round you want to take just enough money to reach the speed where you can shift into the next gear.\n\nFew startups get it quite right. Many are underfunded. A few are overfunded, which is like trying to start driving in third gear."
}
```
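A minimal sketch of how an entry can be checked during an evaluation, using the example above (the `model_answer` value stands in for a hypothetical LLM response; it is not part of the dataset):

```python
import json

# The example entry from above, with an abbreviated context.
entry = json.loads("""
{
  "context_length": 512,
  "context_depth": 0.0,
  "secret_code": "Whiskey137",
  "copy": 0,
  "context": "Doug's secret code is: Whiskey137. Remember this. Venture funding works like gears."
}
""")

# Sanity check: the needle is actually present in the haystack.
assert entry["secret_code"] in entry["context"]

# The evaluation passes if the model's answer matches the injected code.
model_answer = "Whiskey137"  # hypothetical LLM output
assert model_answer == entry["secret_code"]
```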

### Dataset Sources

Data was generated using the essays of [Paul Graham](https://www.paulgraham.com/articles.html) as the haystack that a random secret code is injected into.

## Uses

This dataset is ready to be used for Needle in a Haystack evaluations.
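Since the dataset ships as a single JSON file (`LFAI_RAG_niah_v1.json`, per the config above), it can be loaded with pandas. The sketch below uses a small in-memory sample in the same shape as the example record; the second entry is invented purely for illustration:

```python
import io
import pandas as pd

# Two-entry sample mirroring the structure of LFAI_RAG_niah_v1.json
# (assumed to be a JSON array of entry objects; the "Tango42" record
# is a made-up illustration, not real dataset content).
sample = io.StringIO("""[
  {"context_length": 512, "context_depth": 0.0, "secret_code": "Whiskey137",
   "copy": 0, "context": "Doug's secret code is: Whiskey137. Remember this."},
  {"context_length": 512, "context_depth": 0.5, "secret_code": "Tango42",
   "copy": 1, "context": "Some essay text. Doug's secret code is: Tango42. More text."}
]""")

df = pd.read_json(sample)
print(df[["context_length", "context_depth", "secret_code", "copy"]])
```

For the real file, replace the `StringIO` sample with the path to `LFAI_RAG_niah_v1.json`.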

## Dataset Structure


Each entry in this dataset contains the following fields:
- `context_length`: the approximate length of the `context` field in characters, rounded to the nearest power of 2
- `context_depth`: approximately how far into the context the secret code phrase is injected, expressed as a fraction of the document depth
- `secret_code`: the secret code generated for the given entry; used to verify that the LLM found the correct code
- `copy`: the experiment is repeated a few times for each length and depth, and this index identifies which repetition the entry belongs to
- `context`: the portion of text with the injected secret code
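To illustrate how `context_depth` relates to the needle's position, here is a hypothetical helper (the `needle_depth` function is not part of the dataset; the wrapper phrase "Doug's secret code is: ..." is taken from the example entry above):

```python
def needle_depth(entry):
    """Return the fractional position of the secret code within the context.

    Assumes the needle is injected as "Doug's secret code is: <code>.",
    as in the example entry shown earlier.
    """
    needle = f"Doug's secret code is: {entry['secret_code']}."
    idx = entry["context"].find(needle)
    if idx < 0:
        raise ValueError("needle not found in context")
    return idx / len(entry["context"])

# Example: a needle at the very start of the context has depth 0.0.
entry = {
    "context_length": 512,
    "context_depth": 0.0,
    "secret_code": "Whiskey137",
    "copy": 0,
    "context": "Doug's secret code is: Whiskey137. Remember this. More essay text follows.",
}
assert needle_depth(entry) == 0.0
```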

## Dataset Card Authors
The LeapfrogAI🐸 team at [Defense Unicorns](https://www.defenseunicorns.com/)🦄

## Dataset Card Contact
- [email protected]