---
license: apache-2.0
inference: false
---
|
|
|
# dragon-mistral-0.3-gguf |
|
|
|
|
|
|
dragon-mistral-0.3-gguf is part of the DRAGON model series, RAG-instruct trained for fact-based question-answering use cases, fine-tuned on top of a Mistral 7B v0.3 base model and packaged in GGUF format.
|
|
|
|
|
### Benchmark Tests |
|
|
|
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) |
|
1 Test Run (with temperature = 0.0 and sample = False), scored with 1 point for a correct answer, 0.5 point for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination (a short scoring sketch follows the results below).
|
|
|
--**Accuracy Score**: **99.5** correct out of 100 |
|
--Not Found Classification: 95.0% |
|
--Boolean: 82.5% |
|
--Math/Logic: 67.5% |
|
--Complex Questions (1-5): 4 (Above Average - multiple-choice, causal) |
|
--Summarization Quality (1-5): 4 (Above Average) |
|
--Hallucinations: No hallucinations observed in test runs. |
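
As a rough illustration of how the rubric above rolls up into the headline accuracy score, here is a minimal Python sketch. The grading itself is done by hand against the answer sheet; the function name and the example grade distribution below are hypothetical.

```python
# Hypothetical sketch of the scoring rubric described above -- not the actual
# evaluation harness.  Each of the 100 benchmark questions receives one grade,
# and the grades are summed into a score out of 100.
POINTS = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def accuracy_score(grades):
    """grades: one label per benchmark question."""
    return sum(POINTS[g] for g in grades)

# e.g., 99 correct answers and 1 partially correct answer -> 99.5 out of 100
print(accuracy_score(["correct"] * 99 + ["partial_or_not_found"]))
```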
|
|
|
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
|
|
|
Note: compare results with [dragon-mistral-7b](https://www.huggingface.co/llmware/dragon-mistral-7b-v0). |
|
|
|
|
|
### Model Description |
|
|
|
|
|
|
- **Developed by:** llmware |
|
- **Model type:** dragon-rag-instruct |
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Finetuned from model:** Mistral-7B-0.3-Base |
|
|
|
Details on the prompt wrapper and other configurations are in the config.json file in this model repository.
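
If you run the GGUF file outside of llmware, the prompt wrapper must be applied by hand. Below is a minimal sketch that assumes the human/bot style wrapper used elsewhere in the DRAGON series; verify the exact format against config.json before relying on it.

```python
# Sketch only: assumes a human/bot style wrapper as used by other DRAGON models.
# Confirm the actual wrapper in this repo's config.json.
def wrap_prompt(context: str, question: str) -> str:
    return f"<human>: {context}\n{question}\n<bot>:"

print(wrap_prompt("The lease term is 36 months.", "What is the term of the lease?"))
```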
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
To pull the model via API: |
|
|
|
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/dragon-mistral-0.3-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
|
|
|
Load it in your favorite GGUF inference engine (a llama-cpp-python sketch follows the llmware example below), or try it with llmware as follows:
|
|
|
```python
from llmware.models import ModelCatalog

# load the model and run a basic inference
model = ModelCatalog().load_model("llmware/dragon-mistral-0.3-gguf", temperature=0.0, sample=False)

text_sample = "..."   # the source passage that grounds the answer
query = "..."         # the question to ask about that passage

response = model.inference(query, add_context=text_sample)
```
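
To run the downloaded GGUF file directly in a generic engine instead, here is a minimal llama-cpp-python sketch. The .gguf filename and the prompt format below are assumptions (check the repo's file list and config.json), and llama-cpp-python is just one example of a compatible engine.

```python
# Sketch only: llama-cpp-python used as an example of a generic GGUF engine.
# The model filename is an assumption -- point model_path at the .gguf file
# actually downloaded by snapshot_download above.
from llama_cpp import Llama

llm = Llama(model_path="/path/on/your/machine/dragon-mistral-0.3.gguf", n_ctx=2048, verbose=False)

text_sample = "..."   # source passage
query = "..."         # question about the passage

# context first, then the question (see the prompt wrapper note above)
prompt = f"<human>: {text_sample}\n{query}\n<bot>:"

output = llm(prompt, max_tokens=200, temperature=0.0)
print(output["choices"][0]["text"].strip())
```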
|
|
|
|
|
|
|
|
## Model Card Contact |
|
|
|
Darren Oberst & llmware team |
|
|