---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Yi
- llama
- llama 2
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- jondurbin/airoboros-2.2.1
---
# airoboros-2.2.1-y34b

Unofficial training of [Jon Durbin](https://huggingface.co/jondurbin)'s powerful airoboros 2.2.1 dataset on [Charles Goddard](https://huggingface.co/chargoddard)'s [Llama-fied Yi 34B model](https://huggingface.co/chargoddard/Yi-34B-Llama), aiming to bring the instruction-following capabilities of the airoboros dataset to the new Yi 34B foundation model.

Because this is a 34B model with grouped-query attention, users can run inference with 4-bit quantization on a single 24 GB consumer GPU.
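
As an illustration, here is a minimal 4-bit loading sketch using transformers and bitsandbytes; the repository path is a placeholder for wherever this model is hosted:

```python
# Minimal 4-bit inference setup; "your-org/airoboros-2.2.1-y34b" is a
# placeholder for this model's actual repository path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/airoboros-2.2.1-y34b"  # placeholder repo path

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # fits on a single 24 GB GPU in 4-bit
)
```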

This Yi model is "Llama-fied", meaning its weight keys have been renamed to match those used in Llama models, eliminating the need for remote code and ensuring compatibility with existing training and inference repositories. Architecturally, it is similar to a Llama 2 34B model with an expanded vocabulary size of 64,000.

This model was retrained thanks to compute provided by [alpin](https://huggingface.co/alpindale), with a monkeypatch to the trainer to resolve EOS-token issues in the prompter. Compared to the previous run, a smaller batch size and learning rate were used, training was extended by one epoch, and 8-bit LoRA was used instead of QLoRA.

## Usage:

The intended prompt format is the modified Vicuna 1.1 instruction format used by airoboros v2:
```
A chat.
USER: {prompt}
ASSISTANT:
```
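
For example, a hedged generation sketch that continues from the 4-bit loading example above (the question and sampling settings are illustrative):

```python
# Build a prompt in the airoboros v2 format and generate a reply.
# Assumes `model` and `tokenizer` from the loading sketch above.
prompt = "A chat.\nUSER: What is grouped-query attention?\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Strip the prompt tokens and print only the assistant's reply.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```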

## Training Details:

The model was trained with [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) as a LoRA adapter on a single A100 80 GB GPU for 4 epochs, then fused into the base model with PEFT.
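
For reference, a sketch of what the final fusion step might look like with PEFT; the adapter path is hypothetical and not part of this repository:

```python
# Hedged sketch of merging a LoRA adapter into the Llama-fied Yi 34B base;
# "path/to/airoboros-lora" is a hypothetical adapter path.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "chargoddard/Yi-34B-Llama",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(base, "path/to/airoboros-lora")
merged = model.merge_and_unload()  # fuse LoRA weights into the base model
merged.save_pretrained("airoboros-2.2.1-y34b")
```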

## License:

This model is built on the Yi 34B base model, which has its own custom license included in this repository.

Please refer to the [airoboros 2.2.1 dataset card](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1) regarding the use of GPT-4 API calls in creating the dataset.