---
language:
- zh
- en
---
# ChatTruth-7B
**ChatTruth-7B** is trained on top of Qwen-VL using carefully curated data. Compared with Qwen-VL, it achieves substantially better performance on high-resolution images, and it introduces a novel Restore Module that greatly reduces the computational cost of high-resolution processing.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/657bef8a5c6f0b1f36fcf28e/kwgU2AxZbJzxmgWULwv6A.png)
## Requirements
* transformers 4.32.0
* Python 3.8 and above
* PyTorch 1.13 and above
* CUDA 11.4 and above
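This card lists no install commands; a minimal sketch of a pip setup follows (the extra packages mirror Qwen-VL's dependency set for its remote code and are an assumption, since ChatTruth-7B builds on Qwen-VL):

```bash
pip install transformers==4.32.0
# Assumed extras, mirroring the Qwen-VL requirements:
pip install torch torchvision pillow tiktoken einops transformers_stream_generator accelerate matplotlib
```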
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)  # fix the random seed for reproducible generation
model_path = 'ChatTruth-7B' # your downloaded model path.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# load the model onto a CUDA device
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
model.generation_config.top_p = 0.01  # near-greedy sampling for stable answers
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},
    {'text': '图片中的文字是什么'},  # "What is the text in the image?"
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 昆明太厉害了 ("Kunming is amazing!")
```
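Since `model.chat` returns the updated conversation `history`, a follow-up turn only needs to pass it back on the next call. A minimal sketch continuing the session above (the follow-up question is a hypothetical example):

```python
# Second turn: reuse the history returned by the first call so the model
# keeps the image and the previous answer in context.
response, history = model.chat(tokenizer, query='图片里还有什么?', history=history)  # "What else is in the image?"
print(response)
```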