win28703 committed
Commit 61bb586
1 Parent(s): 0cc0759

Upload README.md with huggingface_hub

Files changed (1): README.md (+42 -0)
README.md ADDED
---
license: llama3.1
language:
- en
inference: false
fine-tuning: false
tags:
- nvidia
- llama3.1
- mlx
datasets:
- nvidia/HelpSteer2
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
pipeline_tag: text-generation
library_name: transformers
---

# win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx

The model [win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx](https://huggingface.co/win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx) was converted to MLX format from [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) using mlx-lm version **0.19.1**.
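
As a point of reference, an 8-bit MLX conversion like this one can typically be reproduced with mlx-lm's `convert` entry point. The command below is an illustrative sketch, not a record of the exact invocation used for this repository; the output directory name is a placeholder.

```bash
# Illustrative only: convert the original Hugging Face checkpoint to MLX
# and quantize the weights to 8 bits (output path is arbitrary).
python -m mlx_lm.convert \
    --hf-path nvidia/Llama-3.1-Nemotron-70B-Instruct-HF \
    --mlx-path Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx \
    -q --q-bits 8
```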

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in the chat
# format the model expects before generating.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
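
The same checkpoint can also be tried from the shell via mlx-lm's command-line generator; the prompt and token limit below are placeholder values.

```bash
# Quick smoke test from the command line (prompt and --max-tokens are examples).
python -m mlx_lm.generate \
    --model win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx \
    --prompt "hello" \
    --max-tokens 100
```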