run the model on a Windows machine without GPU / run 13B on CPU
#19 · by diagaiwei · opened
Hi, I adapted llama.cpp to run Baichuan-13B. Now you can run it smoothly on a machine with around 12 GB of RAM and no GPU: https://github.com/ouwei2013/baichuan13b.cpp.git. Usage is basically the same as llama.cpp.
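The ~12 GB RAM figure is consistent with 4-bit quantization, which is the typical llama.cpp setup. A rough back-of-envelope sketch (my own arithmetic, not a measurement from the linked repo):

```python
# Approximate memory needed to hold a 13B model's weights on CPU
# when quantized to 4 bits per parameter (llama.cpp-style Q4).
# These are assumed round numbers, not figures from the repo.
params = 13_000_000_000          # 13B parameters
bytes_per_param_q4 = 0.5         # 4 bits = half a byte per weight
weights_gb = params * bytes_per_param_q4 / 1024**3

# The KV cache, scratch buffers, and the OS add a few more GB on
# top of the weights, which is why ~12 GB total RAM is a
# comfortable floor for a 13B model.
print(f"quantized weights: ~{weights_gb:.1f} GB")
```

At full fp16 precision the weights alone would be about 24 GB, so quantization is what makes the CPU-only run feasible.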
GradientGuru changed discussion status to closed
What speedup do you get running this model with llama.cpp? Have you tried fastllm?