---
license: llama3.1
language:
- en
tags:
- llama3.1
- rag
- brag
- chatrag
- chatqa
---

## Description
This repo contains GGUF format model files for BRAG-Llama-3.1-8b-v0.1.

## Files Provided
| Name                             | Quant | Bits | File Size | Remark                          |
| -------------------------------- | ----- | ---- | --------- | ------------------------------- |
| brag-llama-3.1-8b-v0.1.Q2_K.gguf | Q2_K  | 2    | 3.18 GB   | 2.96G, +3.5199 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q3_K.gguf | Q3_K  | 3    | 4.02 GB   | 3.74G, +0.6569 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q4_0.gguf | Q4_0  | 4    | 4.66 GB   | 4.34G, +0.4685 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q4_K.gguf | Q4_K  | 4    | 4.92 GB   | 4.58G, +0.1754 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q5_K.gguf | Q5_K  | 5    | 5.73 GB   | 5.33G, +0.0569 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q6_K.gguf | Q6_K  | 6    | 6.60 GB   | 6.14G, +0.0217 ppl @ Llama-3-8B |
| brag-llama-3.1-8b-v0.1.Q8_0.gguf | Q8_0  | 8    | 8.54 GB   | 7.96G, +0.0026 ppl @ Llama-3-8B |
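
The GGUF files above can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the local file path, context size, and example messages are illustrative assumptions, not part of this repo.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumed local path to one of the quantized files listed above;
# adjust to wherever you downloaded it.
llm = Llama(
    model_path="./brag-llama-3.1-8b-v0.1.Q4_K.gguf",
    n_ctx=8192,       # context window; raise it if your RAG contexts are longer
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support; 0 for CPU-only
)

# The chat template is read from the GGUF metadata, so the usual
# messages format from the Usage section below works directly.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant who gives helpful, detailed, and polite answers to the user's questions based on the context with appropriate reasoning as required. Indicate when the answer cannot be found in the context."},
        {"role": "user", "content": "Context: The Eiffel Tower was completed in 1889.\n\nWhen was the Eiffel Tower completed?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```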

## Parameters
| path                                  | type  | architecture     | rope_theta | sliding_win | max_pos_embed |
| ------------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- |
| meta-llama/Meta-Llama-3.1-8B-Instruct | llama | LlamaForCausalLM | 500000.0   | null        | 131072        |
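
These values come straight from the base model's configuration and can be checked with the standard `transformers.AutoConfig` API. A quick sketch (attribute names follow the Llama config schema; `sliding_window` is not a Llama config field, hence the null above):

```python
from transformers import AutoConfig

# Note: meta-llama repos are gated, so this may require Hugging Face
# authentication and an accepted license.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

print(config.model_type)                        # "llama"
print(config.architectures)                     # ["LlamaForCausalLM"]
print(config.rope_theta)                        # 500000.0
print(config.max_position_embeddings)           # 131072
print(getattr(config, "sliding_window", None))  # None: Llama has no sliding-window attention
```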

# Original Model Card

# BRAG-Llama-3.1-8b-v0.1 Model Card

## Model Description

BRAG-Llama-3.1-8b-v0.1 is part of the [BRAG series of SLMs (Small Language Models)](https://huggingface.co/collections/maximalists/brag-v01-66aefc10e3a6c29c496c7476), specifically trained for RAG (Retrieval-Augmented Generation) tasks, including:

1. RAG with tables and text.
2. RAG with conversational chat.

**Authors**: [Pratik Bhavsar](https://www.linkedin.com/in/bhavsarpratik/), [Ravi Theja](https://www.linkedin.com/in/ravidesetty/)

## Key Features

- **Capabilities**: RAG tasks with both tables and text, as well as conversational chat.
- **Model Size**: 8 billion parameters
- **Context Length**: Supports up to 128k tokens
- **Language**: Trained and evaluated for English, but the base model has multilingual capabilities.

## Other Resources

[BRAG-v0.1 Model Collection](https://huggingface.co/collections/maximalists/brag-v01-66aefc10e3a6c29c496c7476) | [Blog](https://themaximalists.substack.com/p/brag)

## Performance

| Model Type     | Model Name             | Model Size | Context Length | ChatRAG-Bench (all) |
| -------------- | ---------------------- | ---------- | -------------- | ------------------- |
| LLM            | Command-R-Plus         | --         | 128k           | 50.93               |
| LLM            | GPT-4-Turbo-2024-04-09 | --         | 128k           | 54.03               |
| SLM            | ChatQA-1.5-8B          | 8b         | 8k             | 55.17               |
| BRAG SLM       | BRAG-Qwen2-7b-v0.1     | 7b         | 128k           | 53.23               |
| BRAG SLM       | BRAG-Llama-3.1-8b-v0.1 | 8b         | 128k           | 52.29               |
| BRAG SLM       | BRAG-Llama-3-8b-v0.1   | 8b         | 8k             | 51.70               |
| BRAG Ultra SLM | BRAG-Qwen2-1.5b-v0.1   | 1.5b       | 32k            | 46.43               |

## Usage

#### Prompt Format

Below is the message prompt format required for using the model.

```python
messages = [
    {"role": "system", "content": "You are an assistant who gives helpful, detailed, and polite answers to the user's questions based on the context with appropriate reasoning as required. Indicate when the answer cannot be found in the context."},
    {"role": "user", "content": """Context: <CONTEXT INFORMATION> \n\n <USER QUERY>"""},
]
```
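
In practice the user turn is simply the retrieved context followed by the question. A minimal helper for filling this format (the function name and example strings are illustrative, not part of the original card):

```python
SYSTEM_PROMPT = (
    "You are an assistant who gives helpful, detailed, and polite answers to the "
    "user's questions based on the context with appropriate reasoning as required. "
    "Indicate when the answer cannot be found in the context."
)

def build_messages(context: str, query: str) -> list[dict]:
    """Assemble the BRAG prompt format from retrieved context and a user query."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context: {context}\n\n{query}"},
    ]

messages = build_messages(
    context="The Grotto is a replica of the grotto at Lourdes, France.",
    query="What is the Grotto a replica of?",
)
```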

#### Running with the `pipeline` API

```python
import transformers
import torch

model_id = "maximalists/BRAG-Llama-3.1-8b-v0.1"

# Load the model in bfloat16 and place it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an assistant who gives helpful, detailed, and polite answers to the user's questions based on the context with appropriate reasoning as required. Indicate when the answer cannot be found in the context."},
    {"role": "user", "content": """Context:\nArchitecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.\n\nTo whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"""},
]

# The pipeline applies the chat template automatically when given a list of messages.
outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The last entry of the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "maximalists/BRAG-Llama-3.1-8b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the model across available GPUs (or falls back to CPU).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an assistant who gives helpful, detailed, and polite answers to the user's questions based on the context with appropriate reasoning as required. Indicate when the answer cannot be found in the context."},
    {"role": "user", "content": """Context:\nArchitecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.\n\nTo whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"""},
]

# Render the chat template into a single prompt string ending with the generation prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```

## Limitations

The model is specifically trained for short contexts and may not perform well with longer ones. It has been fine-tuned on an English dataset. To avoid underperformance and the potential for hallucinations, please use the system prompt mentioned above.

## Citation

To cite this model, please use the following:

```bibtex
@misc{BRAG-Llama-3.1-8b-v0.1,
  title = {BRAG-Llama-3.1-8b-v0.1},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/maximalists/BRAG-Llama-3.1-8b-v0.1},
  author = {Pratik Bhavsar and Ravi Theja}
}
```

## Additional Information

For more details on the BRAG series and updates, please refer to the official [blog](https://themaximalists.substack.com/p/brag).