🇹🇭 OpenThaiGPT 1.0.0-beta
OpenThaiGPT Version 1.0.0-beta is a 7B-parameter LLaMA model finetuned to follow Thai-translated instructions, built on the Hugging Face LLaMA implementation.
Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- Discord server for discussion and support
- E-mail: [email protected]
License
- Source Code: Apache Software License 2.0.
- Weights: research use only (due to Facebook's LLaMA weight license).
- Note: a commercial-use license for the OpenThaiGPT 0.1.0 weights will be released soon!
Code and Weights
- Library Code: https://github.com/OpenThaiGPT/openthaigpt
- Finetune Code: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta
- Weights: https://huggingface.co/kobkrit/openthaigpt-0.1.0-beta
Sponsors
Pantip.com, ThaiSC
Powered by
OpenThaiGPT Volunteers, Artificial Intelligence Entrepreneur Association of Thailand (AIEAT), and Artificial Intelligence Association of Thailand (AIAT)
Authors
Kobkrit Viriyayudhakorn ([email protected]), Sumeth Yuenyong ([email protected]) and Thaweewat Ruksujarit ([email protected]).
Disclaimer: Generated responses are provided as-is, with no guarantee of accuracy.
Local Setup
Install dependencies
pip install -r requirements.txt
If bitsandbytes doesn't work, install it from source. Windows users can follow these instructions.
Training (finetune.py)
This file contains a straightforward application of PEFT to the LLaMA model, as well as some code related to prompt construction and tokenization. PRs adapting this code to support larger models are always welcome.
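For orientation, the core of that PEFT application looks roughly like the following. This is a hedged sketch, not the actual contents of finetune.py; the model name and LoRA values simply mirror the CLI flags documented below.

# Hedged sketch: wrap an 8-bit LLaMA base model with LoRA adapters via PEFT.
# Values mirror the CLI flags below; finetune.py itself may differ in detail.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

base_model = "decapoda-research/llama-7b-hf"
model = LlamaForCausalLM.from_pretrained(
    base_model, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained(base_model)

model = prepare_model_for_int8_training(model)  # prep 8-bit model for training
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM",
))
model.print_trainable_parameters()  # only the small LoRA matrices are trainable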
Example usage:
python finetune.py \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path 'Thaweewat/alpaca-cleaned-52k-th' \
--output_dir './openthaigpt-010-beta'
We can also tweak our hyperparameters:
python finetune.py \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path 'Thaweewat/alpaca-cleaned-52k-th' \
--output_dir './openthaigpt-010-beta' \
--batch_size 128 \
--micro_batch_size 4 \
--num_epochs 3 \
--learning_rate 1e-4 \
--cutoff_len 512 \
--val_set_size 2000 \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_target_modules '[q_proj,v_proj]' \
--train_on_inputs \
--group_by_length
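Two of the flags above concern prompt construction and tokenization: each training example is rendered into an Alpaca-style prompt and tokenized up to --cutoff_len tokens, and --train_on_inputs controls whether the loss is also computed over the prompt portion. Below is a hedged sketch of that template; the exact wording in finetune.py (and any Thai translation of the boilerplate) may differ.

# Hedged sketch of the Alpaca-style prompt template typically used by
# alpaca-lora-derived finetune scripts; the dataset supplies Thai-translated
# instruction/input/output fields that are substituted in.
def generate_prompt(instruction: str, input_text: str = "", response: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n{response}"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{response}"
    )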
Inference (generate.py)
This file reads the foundation model from the Hugging Face model hub and the LoRA weights from kobkrit/openthaigpt-0.1.0-beta, and runs a Gradio interface for inference on a specified input. Users should treat this as example code for the use of the model and modify it as needed.
Example usage:
python generate.py \
--load_8bit \
--base_model 'decapoda-research/llama-7b-hf' \
--lora_weights 'kobkrit/openthaigpt-0.1.0-beta'
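In outline, the inference path looks like the sketch below. This is a hedged reconstruction rather than the actual script; the generation settings and the example prompt are illustrative.

# Hedged sketch of the inference path in generate.py: load the 8-bit base
# model, attach the LoRA adapter, and generate a completion for one prompt.
import torch
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "kobkrit/openthaigpt-0.1.0-beta")
model.eval()

# Example Thai instruction: "What are some famous Thai dishes?"
prompt = "### Instruction:\nอาหารไทยที่มีชื่อเสียงมีอะไรบ้าง\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        generation_config=GenerationConfig(temperature=0.1, top_p=0.75, num_beams=4),
        max_new_tokens=256,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))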
Official weights
The most recent "official" OpenThaiGPT 0.1.0-beta adapter, available at kobkrit/openthaigpt-0.1.0-beta, was trained on May 13 with the following command:
python finetune.py \
--base_model='decapoda-research/llama-7b-hf' \
--data_path '../datasets/cleaned' \
--num_epochs=3 \
--cutoff_len=2048 \
--group_by_length \
--output_dir='./openthaigpt-010-beta' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=64 \
--batch_size=64 \
--micro_batch_size=4
Checkpoint export (export_*_checkpoint.py)
These files contain scripts that merge the LoRA weights back into the base model for export to Hugging Face format and to PyTorch state_dicts. They should help users who want to run inference in projects like llama.cpp or alpaca.cpp.
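With PEFT, the merge itself can be done via merge_and_unload(); the following is a hedged sketch under that assumption, and the real export_*_checkpoint.py scripts may additionally handle format-specific details (e.g., parameter-name mapping for llama.cpp).

# Hedged sketch: fold the LoRA deltas into the base weights, then save in
# Hugging Face format and as a raw PyTorch state_dict. Output paths are
# illustrative, not the ones used by the actual export scripts.
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "kobkrit/openthaigpt-0.1.0-beta").merge_and_unload()

merged.save_pretrained("./openthaigpt-merged-hf")            # Hugging Face format
torch.save(merged.state_dict(), "./openthaigpt-merged.pth")  # PyTorch state_dict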
Docker Setup & Inference
- Build the container image:
docker build -t openthaigpt-finetune-010beta .
- Run the container (you can also use finetune.py and all of its parameters as shown above for training):
docker run --gpus=all --shm-size 64g -p 7860:7860 -v ${HOME}/.cache:/root/.cache --rm openthaigpt-finetune-010beta generate.py \
--load_8bit \
--base_model 'decapoda-research/llama-7b-hf' \
--lora_weights 'kobkrit/openthaigpt-0.1.0-beta'
- Open http://localhost:7860 in the browser
Docker Compose Setup & Inference
- (Optional) Change the desired model and weights under environment in the docker-compose.yml
- Build and run the container:
docker-compose up -d --build
- Open http://localhost:7860 in the browser
- See logs:
docker-compose logs -f
- Clean everything up:
docker-compose down --volumes --rmi all