I have some proof #1
by bhuwansaik - opened
Proof of what exactly? It's pretty obvious we didn't use their model. Our v1 model (released before WizardCoder, by the way) was trained on a WizardCoder-style dataset that we made ourselves, and that was simply our internal nomenclature for the model.
Even based on this screenshot, it wouldn't make any sense that we used their model, because the screenshot contains no checkpoint information.
Again, we did not use anything from WizardCoder, and I want to make sure that we are extremely clear about this. It's obvious if you use our model that it is completely different: theirs is derived from CodeLlama-34B-Python, while this model is derived from CodeLlama-34B.
michaelroyzen changed discussion status to closed