EDIT: All quants work if you pass the "--no-warmup" parameter.
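For reference, a minimal sketch of that workaround, assuming a recent llama.cpp build whose CLI binary is named "llama-cli" and is on PATH; the model path and prompt are placeholders, not files from this page:

```python
import subprocess

# Illustrative llama.cpp invocation of the workaround in the EDIT note.
# Assumptions: "llama-cli" is the binary name and is on PATH; the model
# path and prompt below are placeholders.
subprocess.run(
    [
        "llama-cli",
        "-m", "model-q5_k_m.gguf",  # any of the GGUF quants listed below
        "--no-warmup",              # skip the warmup pass some quants fail on
        "-p", "Hello, world",
    ],
    check=True,
)
```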
RachidAR
AI & ML interests
1.58-bit LLMs
Collections (4)
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 590
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
  Paper • 2403.03507 • Published • 182
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 52
- ResLoRA: Identity Residual Mapping in Low-Rank Adaption
  Paper • 2402.18039 • Published • 11
Models (25)
- RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF
  Text Generation • Updated
- RachidAR/Mistral-Small-Instruct-2409-Q4_K_M-GGUF
  Updated • 17
- RachidAR/RWKV-v6-Finch-14B-HF-Q5_K_M-GGUF
  Updated • 24
- RachidAR/RWKV-v6-Finch-7B-HF-Q5_K_M-GGUF
  Updated • 23
- RachidAR/RWKV-v6-Finch-1B6-HF-Q5_K_M-GGUF
  Updated • 110 • 1
- RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF
  Text Generation • Updated • 19
- RachidAR/Phi-3-mini-4k-ins-June2024-Q5_K_M-imat-GGUF
  Text Generation • Updated • 13
- RachidAR/Phi-3-mini-4k-instruct-June2024-Q6_K-GGUF
  Text Generation • Updated • 23
- RachidAR/saiga_llama3_8b-Q6_K-GGUF
  Updated • 7
- RachidAR/Llama-3-8B-Instruct-DPO-v0.3-Q6_K-GGUF
  Text Generation • Updated • 11
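For completeness, a hedged sketch of fetching one of the GGUF quants above with huggingface_hub; the repo_id is taken from the list, while the .gguf filename is an assumption based on common naming and may differ in the actual repo:

```python
from huggingface_hub import hf_hub_download

# Download one quant from the models list above. The repo_id is real; the
# filename follows typical lowercase GGUF naming and is an assumption that
# should be checked against the repo's file listing.
path = hf_hub_download(
    repo_id="RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF",
    filename="phi-3.5-mini-instruct-q5_k_m.gguf",
)
print(path)  # local path to pass to llama-cli -m, with --no-warmup per the note
```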
Datasets
None public yet