Now we wait...
Once the 8k version of this model releases, nothing will be able to stop us. Muahahaha *evil laugh*
But seriously, a big model like this with an 8k context would be amazing. This and Guanaco 65B are the closest things competing with GPT-3.5.
@rombodawg Source for the info on the 8k Falcon model?
Sorry, I wasn't making any claims, just wishful thinking is all.
It is happening TODAY! :D
Already using the new branch of ggllm.cpp and currently trying it out.
See here => https://github.com/cmp-nct/ggllm.cpp/issues/62
To try it out:
git pull origin
git checkout 16k-context-upgrade
It's due to be merged sometime over the weekend, I reckon, but you can try it ahead of time.
Just remember to switch back to master once it's merged, with git checkout master ;)
Let's gooo
Also, as an update: I've heard you can pretty much force-run any model at any context length without fine-tuning and it will work. Accuracy is an issue with some models, but I've tested Guanaco 65B at 8k and it works really well. Not sure how well Falcon will handle it, though.
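For anyone wanting to try the same thing, here's a minimal sketch of what an extended-context run could look like with the ggllm.cpp branch. This assumes the falcon_main binary and the llama.cpp-style -c context flag carry over to this fork, and the model path is just a placeholder — check the branch and its README for the exact build steps and options:

```shell
# Fetch the branch mentioned above (assumes a standard CMake build, as on master)
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
git checkout 16k-context-upgrade
cmake -B build && cmake --build build --config Release

# Run with a larger context window; -c sets the context size in tokens.
# Binary name, flag, and model path are illustrative — adjust to your setup.
./build/bin/falcon_main -m /path/to/falcon-40b.ggml.bin -c 8192 -p "Hello"
```

If the branch does need anything beyond a plain -c bump (e.g. a scaling option), it should be documented in issue #62 linked above.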