TheBloke committed on
Commit 6e472f0
1 Parent(s): 7ab2d9a

Initial GGUF model commit

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -204,6 +204,10 @@ This is Transformers/HF format fp16 weights for CodeLlama 7B-Python. It is the
 
 Quantisations will be coming shortly.
 
+Please note that due to a change in the RoPE Theta value, for correct results you must load these FP16 models with `trust_remote_code=True`
+
+Credit to @emozilla for creating the necessary modelling code to achieve this!
+
 ## Prompt template: TBC
 
 
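
For readers who want to try the loading requirement described in the added lines, here is a minimal sketch of loading the fp16 weights with `trust_remote_code=True` so the custom RoPE Theta modelling code is used. The repo id `TheBloke/CodeLlama-7B-Python-fp16`, the generation settings, and the example prompt are assumptions for illustration, not taken from the commit.

```python
# Minimal sketch: load the fp16 CodeLlama 7B-Python weights with
# trust_remote_code=True so the custom modelling code (which sets the
# changed RoPE Theta value) is applied. Repo id below is an assumption.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/CodeLlama-7B-Python-fp16"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the fp16 dtype the weights are stored in
    device_map="auto",       # place layers on available GPU(s) via accelerate
    trust_remote_code=True,  # required for the custom RoPE Theta modelling code
)

# Example prompt (illustrative only; the prompt template is still TBC).
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Without `trust_remote_code=True`, Transformers would fall back to its built-in LLaMA modelling code and the changed RoPE Theta value would not be applied, which is why the note flags it as required for correct results.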