---
license: mit
model-index:
- name: selfrag_llama2_7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 51.45
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 78.48
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 52.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 41.73
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.16
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 10.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=selfrag/selfrag_llama2_7b
      name: Open LLM Leaderboard
---
This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates outputs for diverse user queries as well as *reflection tokens*, which adaptively call the retrieval system and critique its own output and the retrieved passages.
Self-RAG is trained on our instruction-following corpora, with retrieved passages and reflection tokens interleaved into the text, using the standard next-token prediction objective, which enables efficient and stable learning with fine-grained feedback.
At inference time, we leverage the reflection tokens, covering diverse aspects of a generation, to sample the output that best aligns with the user's preferences.
See the full description in [our paper](https://arxiv.org/abs/2310.11511).
## Usage
Here, we show an easy way to quickly download our model from HuggingFace and run it with `vllm` using pre-given passages. Make sure to install the dependencies listed at [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirements.txt).
To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag).
```py
from vllm import LLM, SamplingParams

# Download the model from the HuggingFace Hub and load it with vLLM in half precision.
model = LLM("selfrag/selfrag_llama2_7b", download_dir="/gscratch/h2lab/akari/model_cache", dtype="half")
# Greedy decoding; keep special tokens so the reflection tokens appear in the output.
sampling_params = SamplingParams(temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False)

def format_prompt(input, paragraph=None):
    prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input)
    if paragraph is not None:
        prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
    return prompt

query_1 = "Leave odd one out: twitter, instagram, whatsapp."
query_2 = "Can you tell me the difference between llamas and alpacas?"
queries = [query_1, query_2]

# Generate without a retrieved passage; the model decides whether retrieval is needed.
preds = model.generate([format_prompt(query) for query in queries], sampling_params)
for pred in preds:
    print("Model prediction: {0}".format(pred.outputs[0].text))
# Model prediction: Twitter, Instagram, and WhatsApp are all social media platforms.[No Retrieval]WhatsApp is the odd one out because it is a messaging app, while Twitter and Instagram are primarily used for sharing photos and videos.[Utility:5]</s> (this query doesn't require factual grounding; just skip retrieval and do normal instruction-following generation)
# Model prediction: Sure![Retrieval]<paragraph> ... (this query requires factual grounding; call a retriever)

# Generate with a retrieved passage appended after the [Retrieval] token.
prompt = format_prompt("Can you tell me the difference between llamas and alpacas?", paragraph="The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.")
preds = model.generate([prompt], sampling_params)
print([pred.outputs[0].text for pred in preds])
# ['[Relevant]Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.[Fully supported][Utility:5]</s>']
```
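Because `skip_special_tokens=False`, the generated text interleaves reflection tokens (e.g., `[Retrieval]`, `[Relevant]`, `[Utility:5]`) with the answer. If you only need the final answer text, a minimal post-processing sketch is shown below; the token list is taken from the examples above and is not exhaustive (the model's full reflection-token vocabulary is described in our repository).
```py
import re

# Reflection tokens seen in the examples above; not an exhaustive list.
REFLECTION_TOKEN_PATTERN = re.compile(
    r"\[(?:Retrieval|No Retrieval|Relevant|Fully supported|Utility:\d)\]|</s>"
)

def strip_reflection_tokens(text: str) -> str:
    """Remove reflection tokens and the end-of-sequence marker, keeping the answer text."""
    return REFLECTION_TOKEN_PATTERN.sub("", text).strip()

print(strip_reflection_tokens(
    "[Relevant]Alpacas are considerably smaller than llamas.[Fully supported][Utility:5]</s>"
))
# Alpacas are considerably smaller than llamas.
```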
## Input Format
As described in the `format_prompt` function above, your input should be formatted as
```
### Instruction:\n{instruction}\n\n### Response:\n
```
or, if you have an additional input, as
```
### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n
```
You can insert retrieved passages anywhere after `### Response:\n`, but make sure to wrap each passage in paragraph tokens (i.e., `<paragraph>{0}</paragraph>`).
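For reference, here is a minimal sketch of a helper that builds both prompt variants and optionally appends a passage; the name `build_prompt` and its argument names are illustrative, not part of the released code.
```py
def build_prompt(instruction, extra_input=None, paragraph=None):
    # Base instruction section.
    prompt = "### Instruction:\n{0}\n\n".format(instruction)
    # Optional additional input section.
    if extra_input is not None:
        prompt += "### Input:\n{0}\n\n".format(extra_input)
    prompt += "### Response:\n"
    # Retrieved passages go after "### Response:\n", wrapped in paragraph tokens.
    if paragraph is not None:
        prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
    return prompt
```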
## Training details
Our training data is available as the HuggingFace dataset [selfrag_train_data](https://huggingface.co/datasets/selfrag/selfrag_train_data).
See our official repository for the training details.
We used 8 A100 40GB GPUs for training on the Stability HPC server.
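If you just want to inspect the training data, a minimal sketch using the `datasets` library is shown below; the use of the default `train` split is an assumption.
```py
from datasets import load_dataset

# Load the Self-RAG instruction-tuning corpus from the HuggingFace Hub.
# Assumes the default "train" split; adjust if the dataset exposes other splits.
train_data = load_dataset("selfrag/selfrag_train_data", split="train")

# Each example interleaves instructions, retrieved passages, and reflection tokens.
print(train_data[0])
```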
## Citation and contact
If you use this model, please cite our work:
```
@article{asai2023selfrag,
  author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh},
  title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection},
  year = {2023},
  journal = {arXiv preprint arXiv:2310.11511},
  url = {https://arxiv.org/abs/2310.11511}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_selfrag__selfrag_llama2_7b).
| Metric |Value|
|---------------------------------|----:|
|Avg. |51.30|
|AI2 Reasoning Challenge (25-Shot)|51.45|
|HellaSwag (10-Shot) |78.48|
|MMLU (5-Shot) |52.00|
|TruthfulQA (0-shot) |41.73|
|Winogrande (5-shot) |73.16|
|GSM8k (5-shot) |10.99|