Update README.md

README.md
# Llama-3-Cantonese-8B-Instruct
## Model Overview / 模型概述
Llama-3-Cantonese-8B-Instruct is a Cantonese language model based on Meta-Llama-3-8B-Instruct, fine-tuned using LoRA. It aims to enhance Cantonese text generation and comprehension capabilities, supporting various tasks such as dialogue generation, text summarization, and question-answering.

Llama-3-Cantonese-8B-Instruct係基於Meta-Llama-3-8B-Instruct嘅粵語語言模型,使用LoRA進行微調。佢旨在提高粵語文本嘅生成同理解能力,支持各種任務,如對話生成、文本摘要同問答。
## Model Features / 模型特性
- **Base Model**: Meta-Llama-3-8B-Instruct
- **Fine-tuning Method**: LoRA instruction tuning
- **Training Steps**: 4562 steps
- **Primary Language**: Cantonese / 粵語
- **Datasets**:
  - [jed351/cantonese-wikipedia](https://huggingface.co/datasets/jed351/cantonese-wikipedia)
  - [lordjia/Cantonese_English_Translation](https://huggingface.co/datasets/lordjia/Cantonese_English_Translation)
- **Training Tools**: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
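
Both training datasets listed above are hosted on the Hugging Face Hub, so they can be pulled directly with the `datasets` library if you want to inspect or extend the fine-tuning data. A minimal sketch, assuming the `datasets` package is installed; the `train` split name is an assumption and may differ per dataset:

```python
from datasets import load_dataset

# Cantonese Wikipedia text (general-domain Cantonese).
wiki = load_dataset("jed351/cantonese-wikipedia", split="train")

# Cantonese-English translation pairs.
translations = load_dataset("lordjia/Cantonese_English_Translation", split="train")

print(wiki[0])
print(translations[0])
```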
## Usage / 用法
You can easily load and use this model with Hugging Face's Transformers library. Here is a simple example:

你可以輕鬆地將此模型與Hugging Face嘅Transformers庫一起使用。下面係一個簡單嘅示例:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub.
model_id = "lordjia/Llama-3-Cantonese-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a Cantonese prompt (illustrative) and generate a continuation.
inputs = tokenizer("用廣東話介紹吓香港。", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
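
Since this is an instruction-tuned chat model, prompts usually work better when formatted with the Llama 3 chat template. A minimal sketch, assuming a recent `transformers` release with chat-template support; the message content and `max_new_tokens` value are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lordjia/Llama-3-Cantonese-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the conversation with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "用廣東話講解一下咩係大型語言模型。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```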
## Quantized Version / 量化版本
A 4-bit quantized version of this model is also available: [llama3-cantonese-8b-instruct-q4_0.gguf](https://huggingface.co/lordjia/Llama-3-Cantonese-8B-Instruct/blob/main/llama3-cantonese-8b-instruct-q4_0.gguf).

此模型的4位量化版本也可用:[llama3-cantonese-8b-instruct-q4_0.gguf](https://huggingface.co/lordjia/Llama-3-Cantonese-8B-Instruct/blob/main/llama3-cantonese-8b-instruct-q4_0.gguf)。
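
The GGUF file can also be run outside the Transformers stack, for example through llama.cpp's Python bindings. A minimal sketch, assuming the `llama-cpp-python` and `huggingface_hub` packages are installed; the context size and prompt are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the 4-bit GGUF file from this repository.
gguf_path = hf_hub_download(
    repo_id="lordjia/Llama-3-Cantonese-8B-Instruct",
    filename="llama3-cantonese-8b-instruct-q4_0.gguf",
)

# Load the quantized model and run a chat completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用廣東話介紹吓香港嘅美食。"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```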
## Alternative Model Recommendation / 備選模型推薦
As an alternative, consider [Qwen2-Cantonese-7B-Instruct](https://huggingface.co/lordjia/Qwen2-Cantonese-7B-Instruct), also fine-tuned by LordJia and based on Qwen2-7B-Instruct.

對於替代方案,請考慮[Qwen2-Cantonese-7B-Instruct](https://huggingface.co/lordjia/Qwen2-Cantonese-7B-Instruct),同樣由LordJia微調並基於Qwen2-7B-Instruct。
## License / 許可證
This model is licensed under the Llama 3 Community License. Please review the terms before use.

此模型根據Llama 3社區許可證獲得許可。請在使用前仔細閱讀呢啲條款。
## Contributors / 貢獻者
- LordJia [https://ai.chao.cool](https://ai.chao.cool/)