T4 - bfloat16 not supported

#2 opened by SylvainV

Hi,
Thanks for your work.
I'd like to know if you plan to release another version of the model in float16 instead of bfloat16.

Best regards,

+1, I think this can be supported easily by adding a device argument to the chat method call, which is then used here: https://huggingface.co/ucaslcl/GOT-OCR2_0/blob/main/modeling_GOT.py#L566
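In the meantime, here is a rough sketch of a float16 workaround for GPUs without bfloat16 support, like the T4. This is untested and assumes the loading snippet from the model card; the bfloat16 autocast hard-coded in modeling_GOT.py would likely still need to be changed to torch.float16 for generation to succeed end to end:

```python
# Hypothetical float16 workaround for GPUs without bfloat16 support (e.g. T4).
# Loading follows the snippet from the GOT-OCR2_0 model card; the bfloat16
# autocast hard-coded inside modeling_GOT.py may still need to be edited
# to torch.float16 for this to work all the way through.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ucaslcl/GOT-OCR2_0', trust_remote_code=True)
model = AutoModel.from_pretrained(
    'ucaslcl/GOT-OCR2_0',
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    pad_token_id=tokenizer.eos_token_id,
)
model = model.eval().half().cuda()  # cast weights to float16 instead of bfloat16

# 'image.jpg' is a placeholder path to any input image.
res = model.chat(tokenizer, 'image.jpg', ocr_type='ocr')
print(res)
```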

This notebook works on Colab, but only with the L4 GPU, not the T4: https://colab.research.google.com/drive/1J_2SyiGvbBt2ohg_aoMyXpc0xjg9LFJg?usp=sharing

Here is a Colab that works on the T4; feel free to like it if it works for you: https://colab.research.google.com/drive/1oMA9u4M_hT5gxd80TptA5ik2xnERCC8F?usp=sharing

@jeanflop Can you please share access to the Colab file you shared?

Sure, done.

Why not load it in float32 then? It's just a sub-1B model.
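Along those lines, a hedged float32 sketch (same model-card loading code, just with torch_dtype swapped); note that a pure-CPU run may still need the hard-coded .cuda() calls in modeling_GOT.py removed, as the CPU report further down describes:

```python
# Minimal float32 sketch (same model-card loading code). float32 sidesteps
# bfloat16 entirely and can run on CPU, at roughly twice the memory of a
# half-precision load. A pure-CPU run may still require editing the
# hard-coded .cuda()/bfloat16 calls in modeling_GOT.py.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ucaslcl/GOT-OCR2_0', trust_remote_code=True)
model = AutoModel.from_pretrained(
    'ucaslcl/GOT-OCR2_0',
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    pad_token_id=tokenizer.eos_token_id,
    torch_dtype=torch.float32,  # full precision: no bf16/fp16 hardware needed
)
model = model.eval()  # stay on CPU, or call .cuda() for a float32 GPU run

res = model.chat(tokenizer, 'image.jpg', ocr_type='ocr')  # placeholder image path
print(res)
```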

Managed to run it on CPU on Windows 10. Updated files are in the archive linked below:

https://drive.google.com/file/d/1cUMwQyWDtk0XUsdYEKbrFlpUIU-Okl8x/view?usp=sharing

Note that the run.py file is located in the model folder.

