---
base_model: smallcloudai/Refact-1_6B-fim
license: bigscience-openrail-m
model_creator: Small Magellanic Cloud AI
model_name: Refact-1.6B
pipeline_tag: text-generation
prompt_template: '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
pretrain-datasets:
- books
- arxiv
- c4
- falcon-refinedweb
- wiki
- github-issues
- stack_markdown
- self-made dataset of permissive github code
datasets:
- bigcode/the-stack-dedup
- rombodawg/2XUNCENSORED_MegaCodeTraining188k
- bigcode/commitpackft
tags:
- code
language:
- en
---

# Refact-1.6B-fim-GGUF

- Model creator: [Small Magellanic Cloud AI](https://huggingface.co/smallcloudai)
- Original model: [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim)

## Description

This repository contains quantized model files in GGUF format for [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim).
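
Each quantization is a single `.gguf` file. As a convenience, the sketch below (not part of the original card) fetches one file with the `huggingface_hub` Python package; the repository id is a placeholder you must replace with this repo's actual id, and the filename follows the pattern used in the `llama.cpp` example further down.

```python
# Hypothetical download sketch using huggingface_hub (assumed installed via
# `pip install huggingface_hub`). REPO_ID is a placeholder for this repository's id.
from huggingface_hub import hf_hub_download

REPO_ID = "<this-repo-id>"            # placeholder, e.g. "<owner>/Refact-1.6B-fim-GGUF"
FILENAME = "refact-1_6b-Q4_K_M.gguf"  # quantization referenced in the llama.cpp example below

local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(local_path)  # path to the downloaded GGUF file
```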

## Prompt: fill in the middle

```
<fim_prefix>def print_hello_world():\n    """<fim_suffix>\n    print("Hello world!")<fim_middle>
```
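
If you prefer to drive the model from Python, a minimal fill-in-the-middle sketch using the `llama-cpp-python` bindings is shown below. The bindings, the local model path, and the `<|endoftext|>` stop token are assumptions, not part of this card; the code before the cursor goes after `<fim_prefix>`, the code after the cursor goes after `<fim_suffix>`, and the model generates the middle.

```python
# Minimal FIM sketch with llama-cpp-python (assumed installed via
# `pip install llama-cpp-python`). Model path and stop token are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="refact-1_6b-Q4_K_M.gguf", n_ctx=4096)

prefix = 'def print_hello_world():\n    """'        # code before the cursor
suffix = '\n    print("Hello world!")'              # code after the cursor
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

out = llm(prompt, max_tokens=64, temperature=0.2, stop=["<|endoftext|>"])
print(out["choices"][0]["text"])  # the generated middle, e.g. a docstring
```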

## Prompt: chat (experimental)

```
<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
```
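
The same bindings can drive the experimental chat format. The sketch below simply assembles the prompt shown above; the newline-separated turns, the `<empty_output>` stop token, and the sampling settings are assumptions rather than documented behaviour of this model.

```python
# Experimental chat sketch with llama-cpp-python; stop token and sampling
# settings are assumptions, not documented behaviour of this model.
from llama_cpp import Llama

llm = Llama(model_path="refact-1_6b-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<empty_output>SYSTEM You are a programming assistant\n"
    "<empty_output>USER How do I sort a list in Python?\n"
    "<empty_output>ASSISTANT"
)

out = llm(prompt, max_tokens=256, temperature=0.2, stop=["<empty_output>"])
print(out["choices"][0]["text"])  # the assistant's reply
```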

## Example `llama.cpp` command

```shell
./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
```

Replace `{prefix}` and `{suffix}` with the code before and after the point you want the model to fill in. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).