---
language:
- en
- zh
license: other
library_name: transformers
tags:
- llama
- qwen
- qwen1.5
- qwen2
license_name: qwen
license_link: >-
https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
pipeline_tag: text-generation
inference: false
model-index:
- name: Qwen1.5-7B-Chat_llamafy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.59
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 78.52
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.18
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.59
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.46
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 14.63
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Minami-su/Qwen1.5-7B-Chat_llamafy
name: Open LLM Leaderboard
---

This is a LLaMAfied version of Alibaba Cloud's Qwen1.5-7B-Chat model. The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py; I modified it to support Qwen1.5, and this model was converted with https://github.com/Minami-su/character_AI_open/blob/main/llamafy_qwen_v2.py.
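Because the conversion rewrites the weights into the LLaMA layout, the checkpoint should also be loadable through the plain LLaMA model class rather than `AutoModelForCausalLM`. A minimal sketch, assuming the converted config declares the `llama` architecture (the tokenizer is loaded with `AutoTokenizer`, as in the usage example below):

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Assumption: the converted config.json reports model_type "llama",
# so the dedicated LLaMA class can load the weights directly.
model = LlamaForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_llamafy",
    torch_dtype="auto",
    device_map="auto",
)
# The tokenizer is loaded the same way as in the usage example below.
tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
```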
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_llamafy", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the chat prompt and generate, streaming the reply as it is produced.
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```
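The streamer already prints the reply as it is generated; if you also want the final text as a string, the prompt tokens can be stripped from `generate_ids` and decoded. A short follow-up, using the same variables as above:

```python
# Decode only the newly generated tokens, dropping the prompt.
response = tokenizer.batch_decode(
    generate_ids[:, inputs.shape[-1]:], skip_special_tokens=True
)[0]
print(response)
```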
## Test

The original Qwen1.5-7B-Chat and the llamafied conversion were evaluated side by side, both loaded in 4-bit.

Original model:

`hf-causal (pretrained=Qwen1.5-7B-Chat), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8`

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4155|± |0.0144|
| | |acc_norm|0.4480|± |0.0145|
|truthfulqa_mc| 1|mc1 |0.3513|± |0.0167|
| | |mc2 |0.5165|± |0.0159|
|winogrande | 0|acc |0.6330|± |0.0135|
Llamafied model:

`hf-causal (pretrained=Qwen1.5-7B-Chat_llamafy), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8`

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4172|± |0.0144|
| | |acc_norm|0.4488|± |0.0145|
|truthfulqa_mc| 1|mc1 |0.3501|± |0.0167|
| | |mc2 |0.5164|± |0.0159|
|winogrande | 0|acc |0.6306|± |0.0136|
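The 4-bit loading used in the runs above can be reproduced approximately with bitsandbytes quantization in `transformers`. A minimal sketch, assuming `bitsandbytes` is installed; the exact quantization settings are not recorded here, so the NF4 config below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit NF4 settings; the original runs only state "load in 4bit".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_llamafy",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
```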
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Minami-su__Qwen1.5-7B-Chat_llamafy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.00|
|AI2 Reasoning Challenge (25-Shot)|57.59|
|HellaSwag (10-Shot) |78.52|
|MMLU (5-Shot) |61.18|
|TruthfulQA (0-shot) |57.59|
|Winogrande (5-shot) |66.46|
|GSM8k (5-shot) |14.63|
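
Avg. is the simple mean of the six benchmark scores: (57.59 + 78.52 + 61.18 + 57.59 + 66.46 + 14.63) / 6 ≈ 56.00.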