# llama-3-8b-slow-DUS-random-layer-method2
llama-3-8b-slow-DUS-random-layer-method2 is a merge of the following models using LazyMergekit:
- ryan0712/llama-3-8b-slow-DUS-random-layer1-method2
- ryan0712/llama-3-8b-slow-DUS-random-layer2-method2
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: ryan0712/llama-3-8b-slow-DUS-random-layer1-method2
        layer_range: [0, 16]
      - model: ryan0712/llama-3-8b-slow-DUS-random-layer2-method2
        layer_range: [0, 16]
merge_method: slerp
base_model: ryan0712/llama-3-8b-slow-DUS-random-layer1-method2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
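To reproduce the merge locally, here is a minimal sketch assuming `mergekit` (the library LazyMergekit wraps) is installed and the YAML above is saved as `config.yaml`; the config filename and output directory are placeholders:

```python
# Notebook-style invocation; paths here are illustrative assumptions
!pip install -qU mergekit
!mergekit-yaml config.yaml ./merged-model --copy-tokenizer
```

The `t` schedule controls how the two models are blended layer by layer: self-attention weights shift from the base model toward the second model across the slice, MLP weights follow the reverse gradient, and all remaining tensors use a constant 0.5. As rough intuition for `merge_method: slerp`, below is a minimal spherical-linear-interpolation sketch between two weight tensors (illustrative only, not mergekit's exact implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors with mixing factor t."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two tensors on the unit hypersphere
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
          + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape)
```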
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ryan0712/llama-3-8b-slow-DUS-random-layer-method2"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model into a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
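Note that while the merge itself was produced in `bfloat16` (see the configuration above), this example loads the weights in `float16` for inference; on GPUs with native bfloat16 support (Ampere or newer) you could pass `torch_dtype=torch.bfloat16` instead.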