---
license: other
---

# WizardLM: An Instruction-following LLM Using Evol-Instruct

These files are the result of merging the [delta weights](https://huggingface.co/victor123/WizardLM) with the original Llama 7B model.

The code for merging is provided in the [WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).
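
The merge itself is essentially an element-wise sum of the base weights and the delta weights. As a minimal sketch of that idea, assuming the deltas are published as a full HF checkpoint of additive differences (the official repo's script may differ in detail, and the paths below are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base Llama 7B weights and the WizardLM deltas in float32.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b", torch_dtype=torch.float32)
delta = AutoModelForCausalLM.from_pretrained("victor123/WizardLM", torch_dtype=torch.float32)

# Add each delta tensor onto the corresponding base tensor.
delta_sd = delta.state_dict()
merged_sd = {name: tensor + delta_sd[name] for name, tensor in base.state_dict().items()}

base.load_state_dict(merged_sd)
base.save_pretrained("wizardlm-7b-merged")
```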

The original WizardLM deltas are in float32, so merging them produces an HF repo that is also float32 and is much larger than a normal 7B Llama model.

Therefore, for this repo I converted the merged model to float16 to produce a standard-size 7B model.

This was achieved by running **`model = model.half()`** prior to saving.
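
As a minimal sketch of that conversion step, assuming the merged float32 model has already been saved to disk (paths are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged float32 model and its tokenizer from disk.
model = AutoModelForCausalLM.from_pretrained("wizardlm-7b-merged")
tokenizer = AutoTokenizer.from_pretrained("wizardlm-7b-merged")

# Cast all weights to float16, roughly halving the on-disk size.
model = model.half()

model.save_pretrained("wizardlm-7b-fp16")
tokenizer.save_pretrained("wizardlm-7b-fp16")
```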

## WizardLM-7B HF

This repo contains the full unquantised model files in HF format for GPU inference and as a base for quantisation/conversion.
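
For example, loading these files for GPU inference with `transformers` might look like the following sketch; the repo id is a placeholder and the prompt format should be checked against the original repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/wizardLM-7B-HF"  # placeholder: use this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the weights are already fp16
    device_map="auto",          # requires accelerate; places layers on the GPU
)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```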

## Other repositories available

* [4bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GGML)
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)

# Original model info

## Full details in the model's Github page

[WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).

## Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions across a range of difficulty levels and skills, in order to improve the performance of LLMs. One evolution step is sketched loosely below.
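
The following is only a loose illustration of the idea, not the authors' actual prompts or pipeline; `complete` is a hypothetical function wrapping whatever LLM API you use:

```python
# Loose illustration of one Evol-Instruct evolution step: an LLM rewrites
# a seed instruction into a harder variant, growing a pool over rounds.
EVOLVE_PROMPT = (
    "Rewrite the following instruction into a more complex version that "
    "requires deeper reasoning, while keeping it answerable:\n\n{instruction}"
)

def evolve(instruction: str, complete) -> str:
    """Ask an LLM to produce a harder variant of `instruction`."""
    return complete(EVOLVE_PROMPT.format(instruction=instruction))

def evolve_pool(seed_instructions, complete, rounds: int = 3):
    """Iteratively grow a pool of instructions of increasing difficulty."""
    pool = list(seed_instructions)
    frontier = list(seed_instructions)
    for _ in range(rounds):
        frontier = [evolve(inst, complete) for inst in frontier]
        pool.extend(frontier)
    return pool
```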

Although WizardLM-7B outperforms ChatGPT on the high-complexity instructions in our complexity-balanced test set, it still lags behind ChatGPT on the test set as a whole, and we consider WizardLM to still be at an early stage. This repository will continue to improve WizardLM: training at larger scale, adding more training data, and developing more advanced large-model training methods.

![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_overall.png)
![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)