---
license: apache-2.0
inference: false 
base_model: llmware/bling-tiny-llama-v0
base_model_relation: quantized 
tags: [green, llmware-rag, p1, ov]
---

# bling-tiny-llama-ov

**bling-tiny-llama-ov** is a very small, very fast fact-based question-answering model, designed for retrieval augmented generation (RAG) with complex business documents, quantized and packaged in OpenVINO int4 format for AI PCs using Intel GPU, CPU and NPU.

This model is one of the smallest and fastest in the series.  For higher accuracy, look at larger models in the BLING/DRAGON series.    
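To get started, here is a minimal usage sketch. It assumes the llmware Python library's `ModelCatalog` interface and the catalog name `bling-tiny-llama-ov`; check the llmware documentation for the exact call signatures, and note that the sample passage and question below are hypothetical.

```python
# Minimal RAG-style usage sketch (assumes: pip install llmware).
# ModelCatalog and the catalog model name follow llmware's usual packaging
# conventions - verify against the llmware docs before relying on them.
from llmware.models import ModelCatalog

# Load the OpenVINO-packaged model from the llmware catalog
model = ModelCatalog().load_model("bling-tiny-llama-ov")

# A text passage retrieved from a business document (hypothetical example)
text_passage = (
    "Invoice #4532 was issued on March 3, 2024 to Acme Corp "
    "for a total amount of $12,450.00, due within 30 days."
)

# Ask a fact-based question grounded in the passage
response = model.inference(
    "What is the total amount of the invoice?",
    add_context=text_passage,
)

print(response)
```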

### Model Description

- **Developed by:** llmware  
- **Model type:** tinyllama  
- **Parameters:** 1.1 billion  
- **Quantization:** int4  
- **Model Parent:** [llmware/bling-tiny-llama-v0](https://www.huggingface.co/llmware/bling-tiny-llama-v0)    
- **Language(s) (NLP):** English  
- **License:** Apache 2.0  
- **Uses:** Fact-based question-answering, RAG  
- **RAG Benchmark Accuracy Score:** 86.5  
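Because the weights are packaged as an OpenVINO int4 IR, they can also be loaded outside of llmware, for example with Hugging Face Optimum Intel. The sketch below is an assumption-laden alternative: it presumes the repository contains a standard OpenVINO IR plus tokenizer files, and that the `<human>`/`<bot>` prompt wrapper of the parent BLING model applies; verify both against the repo files and the parent model card.

```python
# Loading sketch via Optimum Intel (assumes: pip install optimum[openvino]).
# Repo layout (OpenVINO IR + tokenizer files) and the BLING prompt wrapper
# are assumptions - confirm against the hub repo and the parent model card.
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/bling-tiny-llama-ov"
model = OVModelForCausalLM.from_pretrained(model_id)  # CPU by default; model.to("GPU") for Intel GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical context + question in the BLING "<human>/<bot>" format
context = "The purchase order total is $8,250 and is payable net 45 days."
question = "When is the purchase order payable?"
prompt = f"<human>: {context}\n{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# Strip the prompt tokens and print only the generated answer
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```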


## Model Card Contact  
[llmware on github](https://www.github.com/llmware-ai/llmware)  
[llmware on hf](https://www.huggingface.co/llmware)  
[llmware website](https://www.llmware.ai)