alpindale committed on
Commit
1161732
1 Parent(s): 98e09aa

Update README.md

Files changed (1): README.md (+9 −3)
README.md CHANGED
@@ -8,12 +8,18 @@ language:
 
 # Landmark Attention LLaMA 33B
 
-This model has been trained using the PEFT LoRA method using the [Landmark Attention](https://arxiv.org/abs/2305.16300) method over 200 steps. The model will likely be trained further and updated later on.
+This model has been trained using the PEFT LoRA technique with the [Landmark Attention](https://arxiv.org/abs/2305.16300) method over 200 steps. The model will likely be trained further and updated later on.
 
 ## Usage
 
-Unlikely to be usable with the popular frontends (e.g. [KoboldAI](https://github.com/henk717/KoboldAI) and [Oobabooga](https://github.com/oobabooga/text-generation-webui)) due to the lack of support for landmark tokens.
+Requires `trust_remote_code` to be set to `True`. In [oobabooga](https://github.com/oobabooga/text-generation-webui), you can simply add the `--trust_remote_code` flag.
+
+You will also need to disable the `Add the bos_token to the beginning of prompts` option in the settings.
 
 ## PEFT Checkpoint
 
-You can likely merge the checkpoint with any other LLaMA-based model (provided they're 33B, of course). This repo contains the merged weights, but you can grab the adapter [here](https://anonfiles.com/F3Pb20wbz7).
+You can probably merge the checkpoint with any other LLaMA-based model (provided they're 33B, of course). This repo contains the merged weights, but you can grab the adapter [here](https://anonfiles.com/F3Pb20wbz7).
+
+## Training Code
+
+You can find the training code [here](https://github.com/eugenepentland/landmark-attention-qlora).
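
The two usage settings the updated README calls out (`trust_remote_code` and no automatic BOS token) can be sketched for the Transformers library as below. This is an illustrative sketch, not part of the commit: the `model_id` default and `device_map` choice are assumptions, and the repo's actual path may differ.

```python
def load_landmark_llama(model_id: str = "alpindale/landmark-attention-llama-33b"):
    """Sketch: load the merged weights with the settings this README requires.

    transformers is imported lazily so the sketch can be read without it installed.
    The default model_id is a guess; point it at this repo's actual path.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Landmark Attention ships custom modeling code, hence trust_remote_code=True.
    tokenizer = AutoTokenizer.from_pretrained(
        model_id,
        trust_remote_code=True,
        add_bos_token=False,  # mirrors disabling the BOS option in the UI settings
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        trust_remote_code=True,
        device_map="auto",  # assumption: spread the 33B weights across available GPUs
    )
    return tokenizer, model
```

If you instead want to apply the standalone adapter to another 33B LLaMA base, the PEFT library's `PeftModel.from_pretrained` followed by `merge_and_unload()` is the usual route.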