Modalities: Text
Formats: parquet
Languages: Thai
Size: 10K - 100K
Tags: instruction-finetuning
License: cc-by-sa-3.0
Update README.md
README.md CHANGED
@@ -1,3 +1,27 @@
 ---
 license: cc-by-sa-3.0
+task_categories:
+- question-answering
+- summarization
+language:
+- th
+tags:
+- instruction-finetuning
+size_categories:
+- 10K<n<100K
+---
+
+# Summary
+
+🇹🇭 Thai instruction dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.
+This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), with another 1.3k pairs custom-generated using GPT-3.5.
+Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
+
+Supported Tasks:
+- Training LLMs
+- Synthetic Data Generation
+- Data Augmentation
+
+Languages: Thai
+Version: 1.0
 ---
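The card above says the source pairs were machine-translated with Google Cloud Translation. A minimal sketch of that kind of pipeline, assuming the `google-cloud-translate` v2 client library with application-default credentials; the field names `instruction`/`input`/`output` follow the Alpaca schema, and this is not the authors' published script:

```python
# Hedged sketch: translate Alpaca-style records to Thai with the
# google-cloud-translate v2 client. Assumes GOOGLE_APPLICATION_CREDENTIALS
# is configured; this is NOT the actual script used to build the dataset.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_record(record: dict, target: str = "th") -> dict:
    """Translate the text fields of one Alpaca-style pair into `target`."""
    out = dict(record)
    for field in ("instruction", "input", "output"):
        text = record.get(field, "")
        if text:  # skip the empty `input` fields common in Alpaca data
            out[field] = client.translate(text, target_language=target)["translatedText"]
    return out

example = {
    "instruction": "What is an index fund?",
    "input": "",
    "output": "An index fund tracks a market index such as the S&P 500.",
}
print(translate_record(example))
```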
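Since the card lists the parquet format and a 10K-100K row count, loading should work directly through 🤗 Datasets. A usage sketch; `user/wealth-alpaca-th` is a placeholder repo id, not the dataset's actual name:

```python
# Hedged sketch: load the parquet dataset described by this card.
# "user/wealth-alpaca-th" is a hypothetical repo id; substitute the
# repository this README belongs to.
from datasets import load_dataset

ds = load_dataset("user/wealth-alpaca-th", split="train")
print(ds)     # features and row count (10K-100K per the card)
print(ds[0])  # one translated instruction/input/output pair
```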
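The linked Kaggle notebook fine-tunes with PEFT/LoRA on free-tier hardware. A minimal sketch of attaching a LoRA adapter with the `peft` library; the base model name and hyperparameters below are illustrative assumptions, not values taken from the notebook:

```python
# Hedged sketch: wrap a causal LM with a LoRA adapter via peft.
# Base model and hyperparameters are placeholders, not the notebook's.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # illustrative base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Training only the adapter weights is what makes fine-tuning feasible within Kaggle's free GPU limits.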