Vijayendra/llama3-8b-lora-cyclic-attention
Tags: PEFT · PyTorch · Safetensors · llama · 4-bit precision · bitsandbytes · arxiv:1910.09700
main / llama3-8b-lora-cyclic-attention / tokenizer.json
Vijayendra: Upload fine-tuned LoRA model with cyclic attention (commit 22876b4, verified, 25 days ago)
9.09 MB