One (imone)
AI & ML interests
Reinforcement Learning, Brain-inspired AI
Professional RL(HF) Hyperparameter Tuner
imone's activity
MMLU Lower Results Theory (3) · #5 opened 6 months ago by fblgit
Why is the "measured" benchmark score of Llama-3-8B so low? (1) · #6 opened 6 months ago by c6sneaky
MATH augmentation correctness (2) · #3 opened 6 months ago by imone
Answer correctness? · #11 opened 6 months ago by imone
License (9) · #3 opened 7 months ago by mrfakename
Update added_tokens.json · #8 opened 8 months ago by vicky4s4s
Consider using an OSI-approved license like Mistral and Phi-2 (1) · #47 opened 9 months ago by imone
Full precision weights (6) · #6 opened 10 months ago by imone
Which model is your demo page using? (2) · #44 opened 10 months ago by wempoo
Freezing Issue with gguf quant (5) · #1 opened 10 months ago by dillfrescott
Fix context length in config · #117 opened 10 months ago by imone
MetaMath QA (1) · #9 opened 10 months ago by mrfakename
Fine Tuning (1) · #8 opened 11 months ago by Aditya0097
Prompt template standard (1) · #7 opened 11 months ago by Hugs4Llamas
Is there a way to get the text embedding? (1) · #5 opened 11 months ago by EladC
What is the base model of openchat? Llama / Mistral / custom? (4) · #4 opened 11 months ago by StephanePop
error in docs (2) · #6 opened 11 months ago by PsiPi
32k context size? (1) · #3 opened 11 months ago by paryska99
How did Mixtral make openchat_3.5 worse? (3) · #34 opened 11 months ago by JJJJJPSYCHIC
Some feedback (1) · #33 opened 11 months ago by cmp-nct