---
tags:
- generated_from_trainer
model-index:
- name: starchat-beta
  results: []
license: bigcode-openrail-m
---

# Model Card for StarChat Beta

StarChat is a series of language models trained to act as helpful coding assistants. StarChat Beta is the second model in the series, a fine-tuned version of StarCoderPlus trained on an "uncensored" variant of the openassistant-guanaco dataset. We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the Open LLM Leaderboard and made the model more helpful at coding tasks. However, this means the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes.

## Intended uses & limitations

The model was fine-tuned on a variant of the OpenAssistant/oasst1 dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.
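As a rough sketch of how one might prompt the model for chat, the snippet below assumes the `<|system|>`/`<|user|>`/`<|assistant|>` dialogue template with `<|end|>` turn separators used by the StarChat series; the repo id `HuggingFaceH4/starchat-beta` and the sampling settings are illustrative choices, not prescriptions:

```python
def build_prompt(query: str, system_message: str = "") -> str:
    """Format a single-turn chat prompt with the assumed special tokens."""
    return (
        f"<|system|>\n{system_message}<|end|>\n"
        f"<|user|>\n{query}<|end|>\n"
        f"<|assistant|>"
    )


def generate(query: str) -> str:
    # Loading the model needs a GPU and the `transformers` library installed.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="HuggingFaceH4/starchat-beta",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    outputs = pipe(
        build_prompt(query),
        max_new_tokens=256,
        do_sample=True,
        temperature=0.2,
        top_p=0.95,
    )
    return outputs[0]["generated_text"]
```

The prompt builder is pure string formatting, so the dialogue template can be inspected without downloading the model.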

## Training and evaluation data

StarChat Beta is trained on an "uncensored" variant of the openassistant-guanaco dataset. We applied the same filtering recipe that was used on the ShareGPT datasets behind WizardLM.
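The exact filter is not spelled out in this card, but such "uncensoring" recipes are typically keyword filters that drop responses containing canned alignment boilerplate. The sketch below is illustrative only: the phrase list and function names are assumptions, not the recipe actually used for StarChat Beta.

```python
# Hypothetical keyword filter for removing alignment boilerplate from
# chat responses; the phrase list is an assumption for illustration.
ALIGNMENT_PHRASES = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
]


def keep_example(response: str) -> bool:
    """Keep a response only if it contains none of the canned phrases."""
    text = response.lower()
    return not any(phrase in text for phrase in ALIGNMENT_PHRASES)


dataset = [
    {"response": "Here is a Python function that sorts a list."},
    {"response": "As an AI language model, I cannot help with that."},
]
filtered = [ex for ex in dataset if keep_example(ex["response"])]
```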

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
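The total train batch size follows from the per-device batch size, the number of devices, and the gradient accumulation steps. As a minimal sketch, the snippet below checks that arithmetic and mirrors the cosine schedule with linear warmup implied by the settings above (the schedule function itself is an illustrative re-implementation, not the trainer's code):

```python
import math

# Effective batch size implied by the hyperparameters above:
train_batch_size = 4  # per device
num_devices = 8
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
# 4 * 8 * 8 = 256, matching total_train_batch_size above.


def cosine_lr(step: int, total_steps: int, base_lr: float = 2e-5,
              warmup_ratio: float = 0.03) -> float:
    """Cosine decay with linear warmup, mirroring the settings above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr over the first 3% of steps.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```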

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321        | 0.98  | 15   | 1.2856          |
| 1.2071        | 1.97  | 30   | 1.2620          |
| 1.0162        | 2.95  | 45   | 1.2853          |
| 0.8484        | 4.0   | 61   | 1.3274          |
| 0.6981        | 4.98  | 76   | 1.3994          |
| 0.5668        | 5.9   | 90   | 1.4720          |
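One thing the table shows is that validation loss bottoms out at step 30 (epoch ~2) and rises afterwards even as training loss keeps falling, a sign of overfitting in the later epochs. The snippet below just reads the table programmatically to pick the lowest-validation-loss checkpoint; the selection code is illustrative, not part of the training recipe.

```python
# Rows of the training results table: (train_loss, epoch, step, val_loss).
results = [
    (1.5321, 0.98, 15, 1.2856),
    (1.2071, 1.97, 30, 1.2620),
    (1.0162, 2.95, 45, 1.2853),
    (0.8484, 4.00, 61, 1.3274),
    (0.6981, 4.98, 76, 1.3994),
    (0.5668, 5.90, 90, 1.4720),
]

# Select the checkpoint with the lowest validation loss.
best = min(results, key=lambda row: row[3])
```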

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3