michaelfeil committed on
Commit 979ee96 • 1 Parent(s): 71eae22

Upload HuggingFaceH4/starchat-alpha ctranslate fp16 weights
README.md CHANGED
````diff
@@ -19,13 +19,14 @@ quantized version of [HuggingFaceH4/starchat-alpha](https://huggingface.co/Huggi
 ```bash
 pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
 ```
-Converted on 2023-
+Converted on 2023-06-02 using
 ```
-ct2-transformers-converter --model HuggingFaceH4/starchat-alpha --output_dir /home/michael/tmp-ct2fast-starchat-alpha --force --copy_files merges.txt all_results.json training_args.bin tokenizer.json README.md dialogue_template.json tokenizer_config.json eval_results.json vocab.json TRAINER_README.md train_results.json generation_config.json trainer_state.json special_tokens_map.json added_tokens.json requirements.txt .gitattributes --quantization
+ct2-transformers-converter --model HuggingFaceH4/starchat-alpha --output_dir /home/michael/tmp-ct2fast-starchat-alpha --force --copy_files merges.txt all_results.json training_args.bin tokenizer.json README.md dialogue_template.json tokenizer_config.json eval_results.json vocab.json TRAINER_README.md train_results.json generation_config.json trainer_state.json special_tokens_map.json added_tokens.json requirements.txt .gitattributes --quantization int8_float16 --trust_remote_code
 ```
 
-Checkpoint compatible to [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2)
--
+Checkpoint compatible to [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2)
+and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
+- `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
 
 ```python
@@ -36,14 +37,14 @@ model_name = "michaelfeil/ct2fast-starchat-alpha"
 # use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
 model = GeneratorCT2fromHfHub(
     # load in int8 on CUDA
-    model_name_or_path=model_name,
+    model_name_or_path=model_name,
     device="cuda",
     compute_type="int8_float16",
     # tokenizer=AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")
 )
 outputs = model.generate(
-    text=["
-    max_length=64,
+    text=["def fibonnaci(", "User: How are you doing? Bot:"],
+    max_length=64,
     include_prompt_in_result=False
 )
 print(outputs)
````
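The changed snippet above pairs `device="cuda"` with the `compute_type` recommended in the compatibility notes. As a minimal sketch of choosing between the two documented modes at runtime (assuming `ctranslate2.get_cuda_device_count()` for GPU detection; the `GeneratorCT2fromHfHub` calls follow the README snippet, and the selection logic is illustrative, not part of this commit):

```python
import ctranslate2
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# Pick the compute_type the README recommends for the available device:
# int8_float16 on CUDA, plain int8 on CPU.
has_cuda = ctranslate2.get_cuda_device_count() > 0
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-starchat-alpha",
    device="cuda" if has_cuda else "cpu",
    compute_type="int8_float16" if has_cuda else "int8",
)
outputs = model.generate(
    text=["def fibonnaci("],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)  # one generated string per input prompt
```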
model.bin CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7e84a8449030244a99fbc3b68aca4f38746a337d4f1c353c0159985b3b9ecd84
+size 15577696155
```
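The new Git LFS pointer pins the uploaded weights by content: `oid` is the SHA-256 of the blob and `size` its byte count. A minimal verification sketch for a downloaded copy (hypothetical, not part of the repo; assumes `model.bin` sits in the working directory):

```python
import hashlib

# Values from the LFS pointer committed above.
EXPECTED_OID = "7e84a8449030244a99fbc3b68aca4f38746a337d4f1c353c0159985b3b9ecd84"
EXPECTED_SIZE = 15577696155  # bytes

sha256 = hashlib.sha256()
size = 0
with open("model.bin", "rb") as f:
    # Hash in 1 MiB chunks so the ~15 GB file never sits in memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("model.bin matches the LFS pointer")
```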