kcoopermiller committed on
Commit
384c304
1 Parent(s): c712179

Update README.md

Files changed (1): README.md (+2 -4)
@@ -123,12 +123,12 @@ Quantized using Huggingface's [candle](https://github.com/huggingface/candle) fr
 ## How to use with Candle quantized T5 example
 Visit the [candle T5 example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/quantized-t5) for more detailed instructions
 
-Clone candle repo:
+1. Clone candle repo:
 ```bash
 git clone https://github.com/huggingface/candle.git
 cd candle/candle-examples
 ```
-Run the following command:
+2. Run the following command:
 ```bash
 cargo run --example quantized-t5 --release -- \
 --model-id "kcoopermiller/aya-101-GGUF" \
@@ -138,8 +138,6 @@ cargo run --example quantized-t5 --release -- \
 --temperature 0
 ```
 
-Note: this runs on CPU
-
 Available weight files:
 - aya-101.Q2_K.gguf
 - aya-101.Q3_K.gguf
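The weight files listed in the README live in the model repository itself. As a minimal sketch of how to fetch one directly, assuming the standard Hugging Face Hub `/<repo>/resolve/<revision>/<file>` URL layout (the `hub_url` helper below is hypothetical, not part of candle or the README):

```python
# Build direct-download URLs for the GGUF weight files listed in the README.
# Assumes the standard Hugging Face Hub layout: /<repo>/resolve/main/<file>.
REPO = "kcoopermiller/aya-101-GGUF"
WEIGHTS = ["aya-101.Q2_K.gguf", "aya-101.Q3_K.gguf"]

def hub_url(repo: str, filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for a file in a Hub model repo."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

for w in WEIGHTS:
    print(hub_url(REPO, w))
```

In practice `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` resolves the same paths and caches the file locally, which is usually preferable to hand-built URLs.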