gave a BibTeX citation
#133 opened about 1 month ago by squidWorm
Phi-2 Instruct: an instruction-following Phi-2 SLM that has undergone SFT and DPO
#132 opened 3 months ago by rasyosef
Update README.md
#131 opened 3 months ago by Yaserrati
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
#130 opened 4 months ago by Khaliq88
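A brief usage sketch continuing the loading snippet above; the prompt and generation length are illustrative assumptions, not part of the original post.

```python
# Continue from the loading snippet above (illustrative prompt and settings).
inputs = tokenizer("Instruct: Write a one-line definition of entropy.\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```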
Tokenizer.model is not present in Files and versions
#129 opened 5 months ago by naina1S
max length control
#127 opened 6 months ago by Ashvithaa
Minimum GPU RAM to fine-tune phi-2 on custom data, and how many data rows are needed to fine-tune the phi-2 model?
#126 opened 7 months ago by Ayushmzn
Model inference speed is too slow (positively related to max_new_tokens length)
#125 opened 7 months ago by Kyrieeee
Inference Endpoints error
#124 opened 7 months ago by gawon16
Generation after finetuning does not end at EOS token
#123 opened 7 months ago by zokica · 1 comment
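A hedged sketch of one common mitigation for issues like this: passing the EOS token id to generate() explicitly (and ensuring the fine-tuning data actually ends each example with the EOS token). The prompt and settings are illustrative assumptions, not the thread's resolution.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Instruct: Name three primary colors.\nOutput:", return_tensors="pt")
# Tell generate() which token ends a sequence; Phi-2's tokenizer has no pad token,
# so the EOS id is commonly reused for padding as well.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```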
Trouble with Phi-2 tokenisation
#116 opened 8 months ago by riedgar-ms · 1 comment
Target modules {'out_proj', 'Wqkv'} are not found in the phi-2 model; how can I fix this error?
#115 opened 8 months ago by roy1109 · 2 comments
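A minimal sketch of the usual workaround, under the assumption that the checkpoint uses the upstream transformers Phi implementation (module names q_proj, k_proj, v_proj, dense) rather than the older remote-code version (Wqkv, out_proj); always verify against model.named_modules() for the checkpoint you actually loaded. The LoRA hyperparameters are illustrative.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
# Print the leaf module names to confirm what this checkpoint actually uses.
print({name.split(".")[-1] for name, _ in model.named_modules()})

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Assumed names for the upstream transformers Phi architecture.
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```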
Fine-tuned phi-2 model loses context once loaded from local
#112 opened 9 months ago by tatvamasi · 1 comment
Disabled autocast
#109 opened 9 months ago by miguelcarv · 9 comments
Model not loading from local directory
#108 opened 9 months ago by mdaniyal214 · 1 comment
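A hedged sketch of the standard local save/reload round trip; the directory path is an illustrative placeholder.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Save BOTH the model and the tokenizer; a common cause of local-load failures
# is a directory that is missing the tokenizer or config files.
save_dir = "./phi-2-local"  # hypothetical path
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Reload from disk by pointing from_pretrained at the directory.
model = AutoModelForCausalLM.from_pretrained(save_dir)
tokenizer = AutoTokenizer.from_pretrained(save_dir)
```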
Phi-2 prompt engineering
#107 opened 9 months ago by setareh1 · 1 comment
Training the model breaks with flash_attn_2: "NameError: name 'index_first_axis' is not defined"
#105 opened 9 months ago by praveeny · 12 comments
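A hedged sketch of enabling FlashAttention 2 through transformers; it assumes a recent transformers release and a correctly built flash-attn package, which is what the NameError above usually points to.

```python
import torch
from transformers import AutoModelForCausalLM

# Requires `pip install flash-attn` built against your CUDA/PyTorch versions;
# the NameError above typically indicates a broken or missing flash-attn install.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
```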
I encountered an error when attempting to train phi
#104 opened 9 months ago by PhelixZhen
Instruction fine-tune Phi-2 on custom data
#103 opened 9 months ago by akashAD · 1 comment
How to fine-tune Phi-2 using RoPE and QLoRA for long-text summary generation?
#102 opened 9 months ago by parikshit1619 · 4 comments
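A hedged QLoRA loading sketch under common assumptions (bitsandbytes installed, a CUDA GPU available); the quantization and LoRA hyperparameters are illustrative, and extending the context past Phi-2's native 2048 tokens would additionally need a RoPE-scaling strategy not shown here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization so the base weights fit in modest GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; module names assume the upstream transformers Phi layout.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```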
supervised finetuning error
#96 opened 10 months ago by AlexMercerXX
How to generate a GGUF file from a fine-tuned phi-2?
#93 opened 10 months ago by fivetech · 1 comment
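A hedged sketch of the usual first step: merge the LoRA adapter into the base weights and save a plain Hugging Face checkpoint, which llama.cpp's HF-to-GGUF conversion script can then consume. The adapter and output paths are illustrative placeholders.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
# "./phi2-lora-adapter" is a hypothetical adapter directory from your fine-tune.
model = PeftModel.from_pretrained(base, "./phi2-lora-adapter")
model = model.merge_and_unload()  # fold the LoRA weights back into the base model

out_dir = "./phi2-merged"  # hypothetical output path
model.save_pretrained(out_dir)
AutoTokenizer.from_pretrained("microsoft/phi-2").save_pretrained(out_dir)
# The merged directory can then be converted with llama.cpp's conversion script.
```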
Loading a QLoRA fine-tuned checkpoint after the recent updates produces different outputs
#88 opened 10 months ago by YaYaGeGe · 6 comments
How to reduce nonsense generation?
#81 opened 10 months ago by Kelmeilia · 9 comments
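A hedged sketch of generation settings that are commonly tuned to curb rambling output; the prompt and the specific values are illustrative assumptions, not recommendations from the thread.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
inputs = tokenizer("Instruct: Summarize why the sky is blue.\nOutput:", return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,         # lower temperature -> less erratic sampling
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage verbatim repetition
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```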
Cannot load the model PhiForSequenceClassificationModified(basemodel.config)
#80 opened 10 months ago by lyliiiii · 3 comments
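A hedged sketch of loading Phi-2 with a stock classification head instead of a hand-modified class; it assumes a transformers version whose upstream Phi implementation includes a sequence-classification variant, and the label count is an illustrative placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/phi-2",
    num_labels=2,  # hypothetical binary-classification setup
)
# Phi-2 ships without a pad token; batched classification needs one.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```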
Irrelevant text generation while prompting
#63 opened 10 months ago by ad6398 · 5 comments
run Phi-2 on your CPU
#62 opened 10 months ago by J22 · 12 comments
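A hedged CPU-only sketch: float32 weights on the CPU, which needs roughly 11 GB of RAM for the 2.7B parameters; quantized runtimes (e.g. a GGUF build) are the usual route on smaller machines. The prompt is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float32,  # CPUs generally lack fast fp16 kernels
)
model.to("cpu")

inputs = tokenizer("Instruct: What is 17 * 24?\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```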
Responses by phi-2 are off; it's depressed and insults me for no reason whatsoever?
#61 opened 10 months ago by NightcoreSkies · 7 comments
"Instruct: <prompt>\nOutput:" or "Instruction: <prompt>\nOutput:"
5
#60 opened 10 months ago
by
J22
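The Phi-2 model card documents the QA format as "Instruct: <prompt>\nOutput:". A minimal sketch of applying it; the question text is an illustrative assumption.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# The model card's QA format: "Instruct: <prompt>\nOutput:"
prompt = "Instruct: Explain what a tokenizer does in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```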
What is the best way to run inference with a LoRA adapter in the PEFT approach?
#53 opened 11 months ago by Pradeep1995
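A hedged sketch of the two common options: keep the adapter attached via PeftModel, or merge it for adapter-free inference. The adapter path and prompt are illustrative placeholders.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Option 1: load the adapter on top of the base model.
model = PeftModel.from_pretrained(base, "./phi2-lora-adapter")  # hypothetical path

# Option 2: merge the adapter weights for slightly faster, adapter-free inference.
model = model.merge_and_unload()

inputs = tokenizer("Instruct: Say hello.\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```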
[Positivity] What is it good for?
#52 opened 11 months ago by dagbs · 1 comment
Review on Phi-2
#45 opened 11 months ago by tusharpaul · 4 comments
Can inference be done with the model converted to OpenVINO?
#44 opened 11 months ago by DeltaLux · 2 comments
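A hedged sketch using Optimum Intel, which wraps OpenVINO export and inference behind a transformers-like API; it assumes `optimum[openvino]` is installed and that the export path supports this architecture in your installed versions. The prompt is illustrative.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained("microsoft/phi-2", export=True)

inputs = tokenizer("Instruct: What is OpenVINO?\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```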
Unimpressed by Phi-2
#39 opened 11 months ago by Aditiyadav · 6 comments
How to fine-tune this? + Training code
#19 opened 11 months ago by cekal · 43 comments
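A hedged, minimal fine-tuning sketch using the plain transformers Trainer and a toy in-memory dataset; every hyperparameter, the dataset, and the output directory are illustrative assumptions, and a realistic run would combine this with the LoRA/QLoRA sketches above plus a proper corpus.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token  # Phi-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# Toy dataset in the model card's "Instruct:/Output:" format (illustrative only);
# each example ends with the EOS token so the model learns to stop.
texts = [
    "Instruct: Name a primary color.\nOutput: Red." + tokenizer.eos_token,
    "Instruct: What is 2 + 2?\nOutput: 4." + tokenizer.eos_token,
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./phi2-sft",  # hypothetical output directory
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```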