Wording in the code example
#2 opened by multimodalart
No description provided.
Currently, the sampling needs to be wrapped in `with torch.cuda.amp.autocast(dtype=dtype):` for bfloat16 to work — see the sketch below.
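A minimal sketch of the wrapping; the `sample` function and the model here are placeholders, not this repo's actual API:

```python
import torch

# Placeholder for the repo's sampling routine; the real call and its
# arguments will differ.
def sample(model, x):
    return model(x)

dtype = torch.bfloat16
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(16, 16).to(device, dtype)   # placeholder model
x = torch.randn(1, 16, device=device, dtype=dtype)

# Wrapping the sampling call in autocast so ops run in bfloat16.
with torch.cuda.amp.autocast(dtype=dtype):
    out = sample(model, x)
```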
Amazing, thanks!!!
dome272 changed pull request status to merged