---
license: cc-by-nc-2.0
pipeline_tag: text-generation
inference: false
library_name: transformers
---
|
## GGUF Quantizations of laserxtral

Join our Discord! https://discord.gg/cognitivecomputations

This repository contains GGUF-format model files for our laserxtral model.

**Note for manual downloaders:** you will generally not want to download the whole repository. Pick a single file from the table below and fetch it with your downloader of choice.
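One way to fetch a single file is with `huggingface-cli` (from the `huggingface_hub` package); the quant shown here is just an example, substitute whichever file you picked from the table:

```shell
# Install the Hugging Face CLI if you don't have it
pip install -U huggingface_hub

# Download only the Q4_K_M file into the current directory
huggingface-cli download cognitivecomputations/laserxtral-GGUF \
  cognitivecomputations_laserxtral-Q4_K_M.gguf --local-dir .
```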
|
|
|
## Ollama Modelfile
|
|
|
```
FROM ./cognitivecomputations_laserxtral-Q4_K_M.gguf
TEMPLATE """{{ if and .First .System }}### Instruction:
{{ .System }}
{{ end }}
### Input:
{{ .Prompt }}

### Response:
"""
SYSTEM """You are Laserxtral, a helpful and uncensored AI assistant."""
PARAMETER num_ctx 8192
PARAMETER stop "### Input"
PARAMETER stop "### Response"
```
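To use the Modelfile above, save it locally (e.g. as `Modelfile`, next to the downloaded GGUF file) and register it with Ollama; `laserxtral` is just a suggested local model name:

```shell
# Build a local Ollama model from the Modelfile, then chat with it
ollama create laserxtral -f Modelfile
ollama run laserxtral
```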
|
|
|
## Provided files
|
|
|
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| [cognitivecomputations_laserxtral-Q2_K.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q2_K.gguf) | Q2_K | 2 | 8.8 GB |
| [cognitivecomputations_laserxtral-Q3_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q3_K_M.gguf) | Q3_K_M | 3 | 11.6 GB |
| [cognitivecomputations_laserxtral-Q4_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q4_K_M.gguf) | Q4_K_M | 4 | 14.6 GB |
| [cognitivecomputations_laserxtral-Q5_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q5_K_M.gguf) | Q5_K_M | 5 | 17.1 GB |
| [cognitivecomputations_laserxtral-Q6_K.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q6_K.gguf) | Q6_K | 6 | 19.8 GB |
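As a rough rule of thumb, pick the largest quant whose file size (plus some headroom for the KV cache and runtime overhead) fits in your available RAM/VRAM. A hypothetical helper sketching that choice, using the sizes from the table above and an assumed 2 GB headroom:

```python
# Sketch only: file sizes come from the table above; the 2 GB headroom
# figure is an assumption, not a measured requirement.
QUANTS = [
    ("Q2_K", 8.8),
    ("Q3_K_M", 11.6),
    ("Q4_K_M", 14.6),
    ("Q5_K_M", 17.1),
    ("Q6_K", 19.8),
]

def pick_quant(free_gb: float, headroom_gb: float = 2.0):
    """Return the largest quant whose size plus headroom fits, or None."""
    best = None
    for name, size_gb in QUANTS:  # list is ordered smallest to largest
        if size_gb + headroom_gb <= free_gb:
            best = name
    return best

print(pick_quant(16))  # Q3_K_M: Q4_K_M would need 16.6 GB with headroom
print(pick_quant(24))  # Q6_K
```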