Update README.md
README.md
CHANGED
@@ -7,197 +7,96 @@ language:
tags:
- RAG
---

# Llama-3-8B-RAG-v1

This model is a fine-tuned version of Llama-3-8B for Retrieval-Augmented Generation (RAG) tasks.

## Model Details

* **Developed by:** GlaiveAI
* **Model type:** Llama-3 fine-tuned for RAG tasks

## Uses

This model is designed for RAG tasks: given a set of documents and a user question, it answers based on the information in those documents and cites the documents it draws on. It supports two answer modes, "Grounded" (respond only with facts from the provided documents) and "Mixed" (combine document facts with the model's own knowledge), selected in the prompt as shown in the example below.
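
The prompt uses a simple plain-text layout: each document as a `Document:<id>` block with a title and text, followed by the answer mode and the question (see the full example in the next section). As a convenience, a small helper along the following lines can assemble that layout from a list of documents; the helper, its name, and the dictionary keys it expects are illustrative assumptions based on this card's example format, not part of the model release.

```python
# Illustrative helper (not part of the model release): builds a user query in the
# Document / Answer Mode / Question layout used by the example below.
def build_rag_query(documents, question, answer_mode="Grounded"):
    """documents: list of {"title": ..., "text": ...}; answer_mode: "Grounded" or "Mixed"."""
    parts = [
        f"Document:{i}\nTitle: {doc['title']}\nText: {doc['text']}"
        for i, doc in enumerate(documents)
    ]
    parts.append(f"Answer Mode: {answer_mode}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)


# Example usage
user_query = build_rag_query(
    documents=[
        {"title": "Financial Compliance and Company Statements", "text": "..."},
        {"title": "Certification of Financial Reports by CEO of Black Knight, Inc.", "text": "..."},
    ],
    question="How does the CEO of Black Knight, Inc. ensure compliance with the Securities Exchange Act of 1934?",
)
print(user_query)
```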

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, pipeline

model_name = "glaiveai/Llama-3-8B-RAG-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example user query: documents, answer mode, and question in the expected layout
user_query = """Document:0
Title: Financial Compliance and Company Statements
Text: [Document text...]

Document:1
Title: Certification of Financial Reports by CEO of Black Knight, Inc.
Text: [Document text...]

Answer Mode: Grounded

Question: How does the CEO of Black Knight, Inc. ensure compliance with the Securities Exchange Act of 1934, and what are the implications of the certification provided?"""

# Prepare the chat template
chat = [
    {"role": "system", "content": "You are a conversational AI assistant that is provided a list of documents and a user query to answer based on information from the documents. The user also provides an answer mode which can be 'Grounded' or 'Mixed'. For answer mode Grounded only respond with exact facts from documents, for answer mode Mixed answer using facts from documents and your own knowledge. Cite all facts from the documents using <co: doc_id></co> tags."},
    {"role": "user", "content": user_query}
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Set up the pipeline and generate a response
pipe = pipeline('text-generation', model=model_name, tokenizer=model_name, device=0)
output = pipe(
    prompt,
    max_length=1000,
    num_return_sequences=1,
    top_k=50,
    top_p=0.95,
    temperature=0.5,
    do_sample=True,
    return_full_text=False
)

print(output[0]['generated_text'])
```

## Output of the above code block

```
Cited Documents: 1
Answer: <co:1>The CEO of Black Knight, Inc., Anthony M. Jabbour, ensures compliance with the Securities Exchange Act of 1934 by certifying that the periodic financial report and financial statements comply fully with the requirements of Section 13(a) or 15(d) of the Act. This certification confirms that the information in the reports fairly presents, in all material respects, the financial condition and results of operations of Black Knight, Inc.</co> This certification is crucial as it assures stakeholders of the reliability of the financial statements provided by the company, thereby maintaining investor confidence and adherence to legal financial reporting standards.
```

In this output:

- The model cites Document 1 as the source of its information.
- It provides a grounded answer based on the content of Document 1, as requested by the "Answer Mode: Grounded" instruction.
- The answer explains how the CEO ensures compliance and what the certification implies.
- The cited facts are wrapped in <co:1></co> tags, indicating that this information comes directly from Document 1.
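
Because cited spans are wrapped in `<co:ID></co>` tags, downstream code can recover which documents the model relied on and strip the markup before display. The snippet below is a minimal sketch of such a parser, based only on the tag format shown in the example above; it is not an official utility shipped with the model, and the tag spellings it accepts (`<co:1>`, `<co: 1>`) are assumptions drawn from this card.

```python
import re

def extract_citations(answer: str):
    """Return the cited document ids and the answer text with <co:...></co> tags removed.

    Assumes the tag format shown in this card, e.g. <co:1>...</co> or <co: 1>...</co>.
    """
    cited_ids = sorted({int(doc_id) for doc_id in re.findall(r"<co:\s*(\d+)\s*>", answer)})
    plain_answer = re.sub(r"</?co:?\s*\d*\s*>", "", answer)
    return cited_ids, plain_answer


# Example usage on a shortened version of the answer above
answer = "<co:1>The CEO certifies that the reports comply with Section 13(a) or 15(d).</co> This maintains investor confidence."
ids, text = extract_citations(answer)
print(ids)   # [1]
print(text)  # The CEO certifies that the reports comply with Section 13(a) or 15(d). This maintains investor confidence.
```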

## Code Explanation

The code is split into two main parts.

### Chat Template Preparation

We create a `chat` list with a system message and the user query, then use `tokenizer.apply_chat_template` to format it into a prompt suitable for the model.
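
To see exactly what string the model receives, you can render the chat without tokenizing. The short sketch below reuses the model name from above with placeholder message contents; the special tokens in the rendered prompt come from the chat template bundled with the tokenizer, so the exact layout is whatever `apply_chat_template` reports rather than anything documented here.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/Llama-3-8B-RAG-v1")
chat = [
    {"role": "system", "content": "..."},  # system prompt from the example above
    {"role": "user", "content": "..."},    # documents + Answer Mode + Question, as above
]

# Render the chat into the plain string the model will actually see.
rendered = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(repr(rendered))

# The token count helps check that the documents fit in the context window.
print("Prompt tokens:", len(tokenizer(rendered)["input_ids"]))
```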

### Pipeline Setup and Generation

We set up a text-generation pipeline with the model and tokenizer, pass the prepared prompt to it, and print the generated response.
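
If you prefer not to use the pipeline helper, the same generation can be done by loading the model directly and calling `generate`. The following is a sketch of that standard `transformers` pattern, not code from the original card; the dtype, `device_map`, and `max_new_tokens` values are assumptions you may want to adjust.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "glaiveai/Llama-3-8B-RAG-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [
    {"role": "system", "content": "..."},  # same system prompt as in the pipeline example
    {"role": "user", "content": "..."},    # same documents + Answer Mode + Question string
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=0.5,
    )

# Decode only the tokens generated after the prompt.
completion = tokenizer.decode(
    generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(completion)
```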