happyme531
committed
Commit 63ec5f9
Parent(s):
86d5a1c
Upload 23 files
- .gitattributes +3 -0
- README.md +140 -0
- added_tokens.json +25 -0
- config.json +27 -0
- generation_config.json +7 -0
- librkllmrt.so +3 -0
- merges.txt +0 -0
- model.safetensors.index.json +796 -0
- patched_modeling_navit_siglip.py +893 -0
- patched_resampler.py +771 -0
- qwen.rkllm +3 -0
- rename_tensors.py +46 -0
- rkllm-convert.py +23 -0
- rkllm_binding.py +227 -0
- run_rknn.py +121 -0
- special_tokens_map.json +172 -0
- test.jpg +0 -0
- tokenization_minicpmv_fast.py +66 -0
- tokenizer.json +0 -0
- tokenizer_config.json +235 -0
- vision_convert_rknn.py +87 -0
- vision_export_onnx.py +53 -0
- vision_transformer.rknn +3 -0
- vocab.json +0 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+librkllmrt.so filter=lfs diff=lfs merge=lfs -text
+qwen.rkllm filter=lfs diff=lfs merge=lfs -text
+vision_transformer.rknn filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,140 @@
Note: due to a suspected problem on the RKLLM side (https://github.com/airockchip/rknn-llm/issues/101), this model's inference output is currently incorrect. This repo will be updated once it is fixed.

NOTE: Due to suspected issues in RKLLM (https://github.com/airockchip/rknn-llm/issues/101), the model cannot be used normally for inference at the moment. Once fixed, this repo will be updated.

# MiniCPM-V-2_6-rkllm

## (English README see below)

Run the powerful MiniCPM-V-2.6 vision-language model on RK3588!

- Inference speed (RK3588): vision encoder 4.8 s (single core) + LLM prefill 2.2 s (92 tokens / 42.5 tps) + decoding 3.25 tps
- Memory usage (RK3588, default context length): vision encoder 1.9 GB + LLM 7.8 GB = 9.7 GB

## Usage

1. Clone or download this repository locally. The model is large, so make sure you have enough disk space.

2. The RKNPU2 kernel driver on the board must be version >= 0.9.6 to run a model this large.
Check the driver version with root privileges:
```bash
> cat /sys/kernel/debug/rknpu/version
RKNPU driver: v0.9.8
```
If the version is too low, update the driver. You may need to update the kernel, or consult the official documentation for help.

3. Install dependencies

```bash
pip install "numpy<2" opencv-python
```
You also need to install rknn-toolkit2-lite2 manually.

4. Run

```bash
python run_rknn.py
```

You can modify `run_rknn.py` to test different inputs.

## Model Conversion

#### Preparation

1. Install rknn-toolkit2 v2.1.0 or later and rkllm-toolkit v1.1.0 or later.
2. Download this repository locally; the model files ending in `.rkllm` and `.rknn` are not needed.
3. Download the MiniCPM-V-2.6 Hugging Face model repository locally. (https://huggingface.co/openbmb/MiniCPM-V-2_6)

#### Converting LLM

1. Copy `rename_tensors.py` from this repository into the root directory of the MiniCPM-V-2.6 Hugging Face model repository and run it. After a moment it generates four safetensors files such as `model-renamed-00001-of-00004.safetensors`, plus a json file.
2. Ignore the json file and move the four safetensors files into the root directory of this repository.
3. Run `rkllm-convert.py`. After a while it produces `qwen.rkllm`, which is the converted model.

#### Converting Visual Encoder

1. Copy `patched_modeling_navit_siglip.py` and `patched_resampler.py` from this repository into the root directory of the MiniCPM-V-2.6 Hugging Face model repository, renaming them to `modeling_navit_siglip.py` and `resampler.py` so they replace the original files.

2. Open `vision_export_onnx.py`, set `MODEL_PATH` to the path of the MiniCPM-V-2.6 model folder, then run it. After a while it produces `vision_encoder.onnx`.
3. Run `vision_convert_rknn.py`. After a while it produces `vision_encoder.rknn`, the converted visual encoder.

## Known Issues

- Due to a suspected issue in RKLLM, this model currently cannot produce correct inference output.
- Due to an issue in RKLLM, the visual encoder and the LLM cannot be loaded at the same time: the visual encoder must be unloaded before the LLM is loaded. Running multiple inferences requires repeating the unload/load cycle, which is very slow.
- The ONNX export code for the visual encoder is taken from https://github.com/sophgo/LLM-TPU/tree/main/models/MiniCPM-V-2_6 (thanks to Sophgo for the code). However, this conversion method appears to drop the original model's adaptive image-slicing algorithm, which may reduce accuracy.

## References

[sophgo/LLM-TPU models/MiniCPM-V-2_6](https://github.com/sophgo/LLM-TPU/tree/main/models/MiniCPM-V-2_6)
[openbmb/MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6)
[Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)

## English README

Run the powerful MiniCPM-V-2.6 visual language model on RK3588!

- Inference speed (RK3588): vision encoder 4.8 s (single core) + LLM prefill 2.2 s (92 tokens / 42.5 tps) + decoding 3.25 tps
- Memory usage (RK3588, default context length): vision encoder 1.9 GB + LLM 7.8 GB = 9.7 GB

## Usage

1. Clone or download this repository locally. The model is large, so make sure you have enough disk space.

2. The RKNPU2 kernel driver version on the development board must be >= 0.9.6 to run such a large model.
Use the following command with root privileges to check the driver version:
```bash
> cat /sys/kernel/debug/rknpu/version
RKNPU driver: v0.9.8
```
If the version is too low, please update the driver. You may need to update the kernel or refer to the official documentation for help.

3. Install dependencies

```bash
pip install "numpy<2" opencv-python
```
You also need to manually install rknn-toolkit2-lite2.

4. Run

```bash
python run_rknn.py
```

You can modify the content in `run_rknn.py` to test different inputs.

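For orientation, the vision-encoder half of the pipeline can be driven with rknn-toolkit2-lite2 roughly as follows. This is a minimal sketch, not the actual `run_rknn.py`: the input resolution, preprocessing and single-core setting are assumptions.

```python
# Minimal sketch of running the converted vision encoder with rknn-toolkit2-lite2.
# Input size, preprocessing and core selection are assumptions, not the repo's exact code.
import cv2
import numpy as np
from rknnlite.api import RKNNLite

VISION_MODEL = "vision_transformer.rknn"
INPUT_SIZE = 448  # assumed export resolution; use whatever vision_export_onnx.py used

vision = RKNNLite()
assert vision.load_rknn(VISION_MODEL) == 0
assert vision.init_runtime(core_mask=RKNNLite.NPU_CORE_0) == 0  # single NPU core

img = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE)).astype(np.float32)
img = np.expand_dims(img, 0)  # NHWC, batch of 1

# The resulting image embedding is what later gets passed to the LLM via librkllmrt.
image_embed = vision.inference(inputs=[img])[0]
print(image_embed.shape)
vision.release()
```
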
## Model Conversion

#### Preparation

1. Install rknn-toolkit2 v2.1.0 or higher, and rkllm-toolkit v1.1.0 or higher.
2. Download this repository locally; you don't need to download the model files ending with `.rkllm` and `.rknn`.
3. Download the MiniCPM-V-2.6 Hugging Face model repository locally. (https://huggingface.co/openbmb/MiniCPM-V-2_6)

#### Converting LLM

1. Copy the `rename_tensors.py` file from this repository to the root directory of the MiniCPM-V-2.6 Hugging Face model repository and run it. After a moment it generates four safetensors files such as `model-renamed-00001-of-00004.safetensors`, plus a json file (see the renaming sketch after this list).
2. Ignore the json file and move those four safetensors files to the root directory of this repository.
3. Execute `rkllm-convert.py`. After a while it generates `qwen.rkllm`, which is the converted model (see the conversion sketch after this list).

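As a rough illustration of step 1: the renaming pass just rewrites tensor names inside each safetensors shard so the LLM weights look like a plain Qwen2 checkpoint. This is a minimal sketch, assuming the mapping simply strips the `llm.` prefix that MiniCPM-V uses; the actual `rename_tensors.py` may apply a different mapping or output naming.

```python
# Sketch of the renaming idea (assumed rule: drop the "llm." prefix).
# Not the repo's rename_tensors.py; the real mapping may differ.
import glob
from safetensors.torch import load_file, save_file

for path in sorted(glob.glob("model-0000?-of-00004.safetensors")):
    tensors = load_file(path)
    renamed = {k[len("llm."):] if k.startswith("llm.") else k: v
               for k, v in tensors.items()}
    save_file(renamed, path.replace("model-", "model-renamed-"))
```

And for step 3: rkllm-toolkit conversions generally follow the load, build, export pattern from Rockchip's published examples. Treat the call arguments and quantization settings below as assumptions; `rkllm-convert.py` in this repo may use different options.

```python
# Sketch of an RKLLM conversion following Rockchip's example flow.
# Quantization settings and paths are assumptions, not necessarily what rkllm-convert.py uses.
from rkllm.api import RKLLM

llm = RKLLM()
assert llm.load_huggingface(model=".") == 0      # directory containing the renamed shards
assert llm.build(do_quantization=True,
                 quantized_dtype="w8a8",         # assumed quantization for RK3588
                 target_platform="rk3588") == 0
assert llm.export_rkllm("./qwen.rkllm") == 0
```
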
#### Converting Visual Encoder

1. Copy `patched_modeling_navit_siglip.py` and `patched_resampler.py` from this repository to the root directory of the MiniCPM-V-2.6 Hugging Face model repository, rename them to `modeling_navit_siglip.py` and `resampler.py`, replacing the original files.

2. Open `vision_export_onnx.py`, set `MODEL_PATH` to the path of the MiniCPM-V-2.6 model folder, then execute it. After a while it generates `vision_encoder.onnx`.
3. Execute `vision_convert_rknn.py`. After a while it generates `vision_encoder.rknn`, which is the converted visual encoder (see the sketch after this list).

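For orientation, an ONNX-to-RKNN conversion with rknn-toolkit2 normally looks like the sketch below; the normalization values and the no-quantization choice are assumptions, not necessarily what `vision_convert_rknn.py` does.

```python
# Sketch of the usual rknn-toolkit2 ONNX -> RKNN flow for the vision encoder.
# Normalization values and quantization choice are assumptions.
from rknn.api import RKNN

rknn = RKNN(verbose=True)
rknn.config(target_platform="rk3588",
            mean_values=[[127.5, 127.5, 127.5]],  # assumed SigLIP-style preprocessing
            std_values=[[127.5, 127.5, 127.5]])
assert rknn.load_onnx(model="vision_encoder.onnx") == 0
assert rknn.build(do_quantization=False) == 0     # keep floating point to preserve accuracy
assert rknn.export_rknn("vision_encoder.rknn") == 0
rknn.release()
```
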
## Known Issues

- Due to a suspected issue in RKLLM, this model currently cannot perform inference normally.
- Due to an issue in RKLLM, the visual encoder and LLM cannot be loaded simultaneously at present. The visual encoder must be unloaded first, then the LLM loaded. If multiple inferences are required, the unloading and loading operations must be repeated, which is very slow (see the sketch after this list).
- The code for converting the visual encoder to ONNX is taken from https://github.com/sophgo/LLM-TPU/tree/main/models/MiniCPM-V-2_6, thanks to Sophgo for providing the code. However, this conversion method seems to have removed the adaptive image partitioning algorithm from the original model, which may lead to a decrease in accuracy.

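To make the second issue concrete, the current workaround is to fully release the vision encoder before loading the LLM, for every single query. Below is a sketch of the vision side of that cycle; the LLM side goes through `rkllm_binding.py` / `librkllmrt.so`, whose API is not shown in this README, so it is only described in the trailing comment.

```python
# Sketch of the slow per-query unload/load cycle forced by the current RKLLM issue.
from rknnlite.api import RKNNLite

def encode_image_then_release(image_batch):
    """Encode one image, then free the NPU so the ~7.8 GB LLM can be loaded."""
    vision = RKNNLite()
    assert vision.load_rknn("vision_transformer.rknn") == 0
    assert vision.init_runtime(core_mask=RKNNLite.NPU_CORE_0) == 0
    image_embed = vision.inference(inputs=[image_batch])[0]
    vision.release()   # must happen before the LLM is loaded via rkllm_binding.py
    return image_embed

# After release(), load the LLM through rkllm_binding.py / librkllmrt.so and feed it
# the embedding; answering another query means repeating this whole cycle.
```
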
## References

[sophgo/LLM-TPU models/MiniCPM-V-2_6](https://github.com/sophgo/LLM-TPU/tree/main/models/MiniCPM-V-2_6)
[openbmb/MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6)
[Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)

added_tokens.json
ADDED
@@ -0,0 +1,25 @@
{
  "</box>": 151651,
  "</image>": 151647,
  "</image_id>": 151659,
  "</point>": 151655,
  "</quad>": 151653,
  "</ref>": 151649,
  "</slice>": 151657,
  "<box>": 151650,
  "<image>": 151646,
  "<image_id>": 151658,
  "<point>": 151654,
  "<quad>": 151652,
  "<ref>": 151648,
  "<slice>": 151656,
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|reserved_special_token_0|>": 151660,
  "<|reserved_special_token_1|>": 151661,
  "<|reserved_special_token_2|>": 151662,
  "<|reserved_special_token_3|>": 151663,
  "<|reserved_special_token_4|>": 151664,
  "<|reserved_special_token_5|>": 151665
}
config.json
ADDED
@@ -0,0 +1,27 @@
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 32768,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": 131072,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.37.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151666
}
generation_config.json
ADDED
@@ -0,0 +1,7 @@
{
  "bos_token_id": 151643,
  "do_sample": false,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.37.0"
}
librkllmrt.so
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f55d476a55680ebedc145c771d29be66334c60c0d9b9eafc8587a2bcf4fddb6
size 6230968
merges.txt
ADDED
The diff for this file is too large to render.
model.safetensors.index.json
ADDED
@@ -0,0 +1,796 @@
{
  "metadata": {
    "total_size": 16198350304
  },
  "weight_map": {
    "lm_head.weight": "model-renamed-00004-of-00004.safetensors",
    "model.embed_tokens.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.input_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.bias": "model-renamed-00003-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-renamed-00003-of-00004.safetensors",
    "model.layers.3.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.input_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.8.self_attn.k_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.q_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.v_proj.bias": "model-renamed-00001-of-00004.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-renamed-00001-of-00004.safetensors",
    "model.layers.9.input_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.k_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.q_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.v_proj.bias": "model-renamed-00002-of-00004.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-renamed-00002-of-00004.safetensors",
    "model.norm.weight": "model-renamed-00003-of-00004.safetensors",
    "resampler.attn.in_proj_bias": "model-renamed-00004-of-00004.safetensors",
    "resampler.attn.in_proj_weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "resampler.attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.kv_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_kv.bias": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_kv.weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_post.bias": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_post.weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_q.bias": "model-renamed-00004-of-00004.safetensors",
    "resampler.ln_q.weight": "model-renamed-00004-of-00004.safetensors",
    "resampler.proj": "model-renamed-00004-of-00004.safetensors",
    "resampler.query": "model-renamed-00004-of-00004.safetensors",
    "vpm.embeddings.patch_embedding.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.embeddings.patch_embedding.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.embeddings.position_embedding.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.0.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.1.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.10.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.11.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.12.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.13.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.14.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
    "vpm.encoder.layers.15.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
484 |
+
"vpm.encoder.layers.15.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
485 |
+
"vpm.encoder.layers.15.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
486 |
+
"vpm.encoder.layers.15.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
487 |
+
"vpm.encoder.layers.15.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
488 |
+
"vpm.encoder.layers.15.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
489 |
+
"vpm.encoder.layers.16.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
490 |
+
"vpm.encoder.layers.16.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
491 |
+
"vpm.encoder.layers.16.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
492 |
+
"vpm.encoder.layers.16.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
493 |
+
"vpm.encoder.layers.16.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
494 |
+
"vpm.encoder.layers.16.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
495 |
+
"vpm.encoder.layers.16.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
496 |
+
"vpm.encoder.layers.16.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
497 |
+
"vpm.encoder.layers.16.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
498 |
+
"vpm.encoder.layers.16.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
499 |
+
"vpm.encoder.layers.16.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
500 |
+
"vpm.encoder.layers.16.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
501 |
+
"vpm.encoder.layers.16.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
502 |
+
"vpm.encoder.layers.16.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
503 |
+
"vpm.encoder.layers.16.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
504 |
+
"vpm.encoder.layers.16.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
505 |
+
"vpm.encoder.layers.17.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
506 |
+
"vpm.encoder.layers.17.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
507 |
+
"vpm.encoder.layers.17.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
508 |
+
"vpm.encoder.layers.17.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
509 |
+
"vpm.encoder.layers.17.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
510 |
+
"vpm.encoder.layers.17.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
511 |
+
"vpm.encoder.layers.17.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
512 |
+
"vpm.encoder.layers.17.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
513 |
+
"vpm.encoder.layers.17.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
514 |
+
"vpm.encoder.layers.17.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
515 |
+
"vpm.encoder.layers.17.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
516 |
+
"vpm.encoder.layers.17.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
517 |
+
"vpm.encoder.layers.17.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
518 |
+
"vpm.encoder.layers.17.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
519 |
+
"vpm.encoder.layers.17.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
520 |
+
"vpm.encoder.layers.17.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
521 |
+
"vpm.encoder.layers.18.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
522 |
+
"vpm.encoder.layers.18.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
523 |
+
"vpm.encoder.layers.18.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
524 |
+
"vpm.encoder.layers.18.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
525 |
+
"vpm.encoder.layers.18.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
526 |
+
"vpm.encoder.layers.18.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
527 |
+
"vpm.encoder.layers.18.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
528 |
+
"vpm.encoder.layers.18.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
529 |
+
"vpm.encoder.layers.18.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
530 |
+
"vpm.encoder.layers.18.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
531 |
+
"vpm.encoder.layers.18.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
532 |
+
"vpm.encoder.layers.18.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
533 |
+
"vpm.encoder.layers.18.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
534 |
+
"vpm.encoder.layers.18.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
535 |
+
"vpm.encoder.layers.18.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
536 |
+
"vpm.encoder.layers.18.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
537 |
+
"vpm.encoder.layers.19.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
538 |
+
"vpm.encoder.layers.19.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
539 |
+
"vpm.encoder.layers.19.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
540 |
+
"vpm.encoder.layers.19.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
541 |
+
"vpm.encoder.layers.19.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
542 |
+
"vpm.encoder.layers.19.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
543 |
+
"vpm.encoder.layers.19.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
544 |
+
"vpm.encoder.layers.19.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
545 |
+
"vpm.encoder.layers.19.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
546 |
+
"vpm.encoder.layers.19.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
547 |
+
"vpm.encoder.layers.19.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
548 |
+
"vpm.encoder.layers.19.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
549 |
+
"vpm.encoder.layers.19.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
550 |
+
"vpm.encoder.layers.19.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
551 |
+
"vpm.encoder.layers.19.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
552 |
+
"vpm.encoder.layers.19.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
553 |
+
"vpm.encoder.layers.2.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
554 |
+
"vpm.encoder.layers.2.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
555 |
+
"vpm.encoder.layers.2.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
556 |
+
"vpm.encoder.layers.2.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
557 |
+
"vpm.encoder.layers.2.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
558 |
+
"vpm.encoder.layers.2.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
559 |
+
"vpm.encoder.layers.2.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
560 |
+
"vpm.encoder.layers.2.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
561 |
+
"vpm.encoder.layers.2.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
562 |
+
"vpm.encoder.layers.2.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
563 |
+
"vpm.encoder.layers.2.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
564 |
+
"vpm.encoder.layers.2.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
565 |
+
"vpm.encoder.layers.2.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
566 |
+
"vpm.encoder.layers.2.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
567 |
+
"vpm.encoder.layers.2.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
568 |
+
"vpm.encoder.layers.2.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
569 |
+
"vpm.encoder.layers.20.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
570 |
+
"vpm.encoder.layers.20.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
571 |
+
"vpm.encoder.layers.20.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
572 |
+
"vpm.encoder.layers.20.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
573 |
+
"vpm.encoder.layers.20.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
574 |
+
"vpm.encoder.layers.20.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
575 |
+
"vpm.encoder.layers.20.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
576 |
+
"vpm.encoder.layers.20.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
577 |
+
"vpm.encoder.layers.20.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
578 |
+
"vpm.encoder.layers.20.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
579 |
+
"vpm.encoder.layers.20.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
580 |
+
"vpm.encoder.layers.20.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
581 |
+
"vpm.encoder.layers.20.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
582 |
+
"vpm.encoder.layers.20.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
583 |
+
"vpm.encoder.layers.20.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
584 |
+
"vpm.encoder.layers.20.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
585 |
+
"vpm.encoder.layers.21.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
586 |
+
"vpm.encoder.layers.21.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
587 |
+
"vpm.encoder.layers.21.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
588 |
+
"vpm.encoder.layers.21.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
589 |
+
"vpm.encoder.layers.21.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
590 |
+
"vpm.encoder.layers.21.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
591 |
+
"vpm.encoder.layers.21.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
592 |
+
"vpm.encoder.layers.21.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
593 |
+
"vpm.encoder.layers.21.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
594 |
+
"vpm.encoder.layers.21.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
595 |
+
"vpm.encoder.layers.21.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
596 |
+
"vpm.encoder.layers.21.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
597 |
+
"vpm.encoder.layers.21.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
598 |
+
"vpm.encoder.layers.21.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
599 |
+
"vpm.encoder.layers.21.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
600 |
+
"vpm.encoder.layers.21.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
601 |
+
"vpm.encoder.layers.22.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
602 |
+
"vpm.encoder.layers.22.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
603 |
+
"vpm.encoder.layers.22.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
604 |
+
"vpm.encoder.layers.22.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
605 |
+
"vpm.encoder.layers.22.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
606 |
+
"vpm.encoder.layers.22.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
607 |
+
"vpm.encoder.layers.22.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
608 |
+
"vpm.encoder.layers.22.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
609 |
+
"vpm.encoder.layers.22.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
610 |
+
"vpm.encoder.layers.22.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
611 |
+
"vpm.encoder.layers.22.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
612 |
+
"vpm.encoder.layers.22.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
613 |
+
"vpm.encoder.layers.22.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
614 |
+
"vpm.encoder.layers.22.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
615 |
+
"vpm.encoder.layers.22.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
616 |
+
"vpm.encoder.layers.22.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
617 |
+
"vpm.encoder.layers.23.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
618 |
+
"vpm.encoder.layers.23.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
619 |
+
"vpm.encoder.layers.23.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
620 |
+
"vpm.encoder.layers.23.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
621 |
+
"vpm.encoder.layers.23.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
622 |
+
"vpm.encoder.layers.23.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
623 |
+
"vpm.encoder.layers.23.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
624 |
+
"vpm.encoder.layers.23.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
625 |
+
"vpm.encoder.layers.23.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
626 |
+
"vpm.encoder.layers.23.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
627 |
+
"vpm.encoder.layers.23.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
628 |
+
"vpm.encoder.layers.23.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
629 |
+
"vpm.encoder.layers.23.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
630 |
+
"vpm.encoder.layers.23.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
631 |
+
"vpm.encoder.layers.23.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
632 |
+
"vpm.encoder.layers.23.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
633 |
+
"vpm.encoder.layers.24.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
634 |
+
"vpm.encoder.layers.24.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
635 |
+
"vpm.encoder.layers.24.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
636 |
+
"vpm.encoder.layers.24.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
637 |
+
"vpm.encoder.layers.24.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
638 |
+
"vpm.encoder.layers.24.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
639 |
+
"vpm.encoder.layers.24.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
640 |
+
"vpm.encoder.layers.24.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
641 |
+
"vpm.encoder.layers.24.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
642 |
+
"vpm.encoder.layers.24.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
643 |
+
"vpm.encoder.layers.24.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
644 |
+
"vpm.encoder.layers.24.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
645 |
+
"vpm.encoder.layers.24.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
646 |
+
"vpm.encoder.layers.24.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
647 |
+
"vpm.encoder.layers.24.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
648 |
+
"vpm.encoder.layers.24.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
649 |
+
"vpm.encoder.layers.25.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
650 |
+
"vpm.encoder.layers.25.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
651 |
+
"vpm.encoder.layers.25.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
652 |
+
"vpm.encoder.layers.25.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
653 |
+
"vpm.encoder.layers.25.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
654 |
+
"vpm.encoder.layers.25.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
655 |
+
"vpm.encoder.layers.25.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
656 |
+
"vpm.encoder.layers.25.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
657 |
+
"vpm.encoder.layers.25.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
658 |
+
"vpm.encoder.layers.25.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
659 |
+
"vpm.encoder.layers.25.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
660 |
+
"vpm.encoder.layers.25.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
661 |
+
"vpm.encoder.layers.25.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
662 |
+
"vpm.encoder.layers.25.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
663 |
+
"vpm.encoder.layers.25.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
664 |
+
"vpm.encoder.layers.25.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
665 |
+
"vpm.encoder.layers.26.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
666 |
+
"vpm.encoder.layers.26.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
667 |
+
"vpm.encoder.layers.26.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
668 |
+
"vpm.encoder.layers.26.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
669 |
+
"vpm.encoder.layers.26.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
670 |
+
"vpm.encoder.layers.26.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
671 |
+
"vpm.encoder.layers.26.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
672 |
+
"vpm.encoder.layers.26.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
673 |
+
"vpm.encoder.layers.26.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
674 |
+
"vpm.encoder.layers.26.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
675 |
+
"vpm.encoder.layers.26.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
676 |
+
"vpm.encoder.layers.26.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
677 |
+
"vpm.encoder.layers.26.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
678 |
+
"vpm.encoder.layers.26.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
679 |
+
"vpm.encoder.layers.26.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
680 |
+
"vpm.encoder.layers.26.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
681 |
+
"vpm.encoder.layers.3.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
682 |
+
"vpm.encoder.layers.3.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
683 |
+
"vpm.encoder.layers.3.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
684 |
+
"vpm.encoder.layers.3.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
685 |
+
"vpm.encoder.layers.3.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
686 |
+
"vpm.encoder.layers.3.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
687 |
+
"vpm.encoder.layers.3.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
688 |
+
"vpm.encoder.layers.3.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
689 |
+
"vpm.encoder.layers.3.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
690 |
+
"vpm.encoder.layers.3.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
691 |
+
"vpm.encoder.layers.3.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
692 |
+
"vpm.encoder.layers.3.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
693 |
+
"vpm.encoder.layers.3.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
694 |
+
"vpm.encoder.layers.3.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
695 |
+
"vpm.encoder.layers.3.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
696 |
+
"vpm.encoder.layers.3.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
697 |
+
"vpm.encoder.layers.4.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
698 |
+
"vpm.encoder.layers.4.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
699 |
+
"vpm.encoder.layers.4.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
700 |
+
"vpm.encoder.layers.4.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
701 |
+
"vpm.encoder.layers.4.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
702 |
+
"vpm.encoder.layers.4.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
703 |
+
"vpm.encoder.layers.4.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
704 |
+
"vpm.encoder.layers.4.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
705 |
+
"vpm.encoder.layers.4.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
706 |
+
"vpm.encoder.layers.4.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
707 |
+
"vpm.encoder.layers.4.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
708 |
+
"vpm.encoder.layers.4.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
709 |
+
"vpm.encoder.layers.4.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
710 |
+
"vpm.encoder.layers.4.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
711 |
+
"vpm.encoder.layers.4.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
712 |
+
"vpm.encoder.layers.4.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
713 |
+
"vpm.encoder.layers.5.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
714 |
+
"vpm.encoder.layers.5.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
715 |
+
"vpm.encoder.layers.5.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
716 |
+
"vpm.encoder.layers.5.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
717 |
+
"vpm.encoder.layers.5.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
718 |
+
"vpm.encoder.layers.5.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
719 |
+
"vpm.encoder.layers.5.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
720 |
+
"vpm.encoder.layers.5.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
721 |
+
"vpm.encoder.layers.5.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
722 |
+
"vpm.encoder.layers.5.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
723 |
+
"vpm.encoder.layers.5.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
724 |
+
"vpm.encoder.layers.5.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
725 |
+
"vpm.encoder.layers.5.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
726 |
+
"vpm.encoder.layers.5.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
727 |
+
"vpm.encoder.layers.5.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
728 |
+
"vpm.encoder.layers.5.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
729 |
+
"vpm.encoder.layers.6.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
730 |
+
"vpm.encoder.layers.6.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
731 |
+
"vpm.encoder.layers.6.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
732 |
+
"vpm.encoder.layers.6.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
733 |
+
"vpm.encoder.layers.6.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
734 |
+
"vpm.encoder.layers.6.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
735 |
+
"vpm.encoder.layers.6.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
736 |
+
"vpm.encoder.layers.6.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
737 |
+
"vpm.encoder.layers.6.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
738 |
+
"vpm.encoder.layers.6.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
739 |
+
"vpm.encoder.layers.6.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
740 |
+
"vpm.encoder.layers.6.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
741 |
+
"vpm.encoder.layers.6.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
742 |
+
"vpm.encoder.layers.6.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
743 |
+
"vpm.encoder.layers.6.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
744 |
+
"vpm.encoder.layers.6.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
745 |
+
"vpm.encoder.layers.7.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
746 |
+
"vpm.encoder.layers.7.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
747 |
+
"vpm.encoder.layers.7.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
748 |
+
"vpm.encoder.layers.7.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
749 |
+
"vpm.encoder.layers.7.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
750 |
+
"vpm.encoder.layers.7.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
751 |
+
"vpm.encoder.layers.7.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
752 |
+
"vpm.encoder.layers.7.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
753 |
+
"vpm.encoder.layers.7.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
754 |
+
"vpm.encoder.layers.7.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
755 |
+
"vpm.encoder.layers.7.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
756 |
+
"vpm.encoder.layers.7.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
757 |
+
"vpm.encoder.layers.7.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
758 |
+
"vpm.encoder.layers.7.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
759 |
+
"vpm.encoder.layers.7.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
760 |
+
"vpm.encoder.layers.7.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
761 |
+
"vpm.encoder.layers.8.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
762 |
+
"vpm.encoder.layers.8.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
763 |
+
"vpm.encoder.layers.8.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
764 |
+
"vpm.encoder.layers.8.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
765 |
+
"vpm.encoder.layers.8.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
766 |
+
"vpm.encoder.layers.8.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
767 |
+
"vpm.encoder.layers.8.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
768 |
+
"vpm.encoder.layers.8.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
769 |
+
"vpm.encoder.layers.8.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
770 |
+
"vpm.encoder.layers.8.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
771 |
+
"vpm.encoder.layers.8.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
772 |
+
"vpm.encoder.layers.8.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
773 |
+
"vpm.encoder.layers.8.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
774 |
+
"vpm.encoder.layers.8.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
775 |
+
"vpm.encoder.layers.8.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
776 |
+
"vpm.encoder.layers.8.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
777 |
+
"vpm.encoder.layers.9.layer_norm1.bias": "model-renamed-00004-of-00004.safetensors",
|
778 |
+
"vpm.encoder.layers.9.layer_norm1.weight": "model-renamed-00004-of-00004.safetensors",
|
779 |
+
"vpm.encoder.layers.9.layer_norm2.bias": "model-renamed-00004-of-00004.safetensors",
|
780 |
+
"vpm.encoder.layers.9.layer_norm2.weight": "model-renamed-00004-of-00004.safetensors",
|
781 |
+
"vpm.encoder.layers.9.mlp.fc1.bias": "model-renamed-00004-of-00004.safetensors",
|
782 |
+
"vpm.encoder.layers.9.mlp.fc1.weight": "model-renamed-00004-of-00004.safetensors",
|
783 |
+
"vpm.encoder.layers.9.mlp.fc2.bias": "model-renamed-00004-of-00004.safetensors",
|
784 |
+
"vpm.encoder.layers.9.mlp.fc2.weight": "model-renamed-00004-of-00004.safetensors",
|
785 |
+
"vpm.encoder.layers.9.self_attn.k_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
786 |
+
"vpm.encoder.layers.9.self_attn.k_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
787 |
+
"vpm.encoder.layers.9.self_attn.out_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
788 |
+
"vpm.encoder.layers.9.self_attn.out_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
789 |
+
"vpm.encoder.layers.9.self_attn.q_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
790 |
+
"vpm.encoder.layers.9.self_attn.q_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
791 |
+
"vpm.encoder.layers.9.self_attn.v_proj.bias": "model-renamed-00004-of-00004.safetensors",
|
792 |
+
"vpm.encoder.layers.9.self_attn.v_proj.weight": "model-renamed-00004-of-00004.safetensors",
|
793 |
+
"vpm.post_layernorm.bias": "model-renamed-00004-of-00004.safetensors",
|
794 |
+
"vpm.post_layernorm.weight": "model-renamed-00004-of-00004.safetensors"
|
795 |
+
}
|
796 |
+
}
|
patched_modeling_navit_siglip.py
ADDED
@@ -0,0 +1,893 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# coding=utf-8
|
2 |
+
# Copyright 2024 Google AI and The HuggingFace Team. All rights reserved.
|
3 |
+
#
|
4 |
+
# Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
+
# you may not use this file except in compliance with the License.
|
6 |
+
# You may obtain a copy of the License at
|
7 |
+
#
|
8 |
+
# http://www.apache.org/licenses/LICENSE-2.0
|
9 |
+
#
|
10 |
+
# Unless required by applicable law or agreed to in writing, software
|
11 |
+
# distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
+
# See the License for the specific language governing permissions and
|
14 |
+
# limitations under the License.
|
15 |
+
""" PyTorch Siglip model. """
|
16 |
+
# Copied from HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit and add tgt_sizes
|
17 |
+
|
18 |
+
|
19 |
+
import os
|
20 |
+
import math
|
21 |
+
import warnings
|
22 |
+
from dataclasses import dataclass
|
23 |
+
from typing import Any, Optional, Tuple, Union
|
24 |
+
|
25 |
+
import numpy as np
|
26 |
+
import torch
|
27 |
+
import torch.nn.functional as F
|
28 |
+
import torch.utils.checkpoint
|
29 |
+
from torch import nn
|
30 |
+
from torch.nn.init import _calculate_fan_in_and_fan_out
|
31 |
+
|
32 |
+
from transformers.activations import ACT2FN
|
33 |
+
from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask
|
34 |
+
from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling
|
35 |
+
from transformers.modeling_utils import PreTrainedModel
|
36 |
+
from transformers.configuration_utils import PretrainedConfig
|
37 |
+
from transformers.utils import (
|
38 |
+
ModelOutput,
|
39 |
+
add_start_docstrings,
|
40 |
+
add_start_docstrings_to_model_forward,
|
41 |
+
is_flash_attn_2_available,
|
42 |
+
logging,
|
43 |
+
replace_return_docstrings,
|
44 |
+
)
|
45 |
+
from transformers.utils import logging
|
46 |
+
|
47 |
+
logger = logging.get_logger(__name__)
|
48 |
+
|
49 |
+
class SiglipVisionConfig(PretrainedConfig):
|
50 |
+
r"""
|
51 |
+
This is the configuration class to store the configuration of a [`SiglipVisionModel`]. It is used to instantiate a
|
52 |
+
Siglip vision encoder according to the specified arguments, defining the model architecture. Instantiating a
|
53 |
+
configuration with the defaults will yield a similar configuration to that of the vision encoder of the Siglip
|
54 |
+
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
|
55 |
+
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
56 |
+
documentation from [`PretrainedConfig`] for more information.
|
57 |
+
Args:
|
58 |
+
hidden_size (`int`, *optional*, defaults to 768):
|
59 |
+
Dimensionality of the encoder layers and the pooler layer.
|
60 |
+
intermediate_size (`int`, *optional*, defaults to 3072):
|
61 |
+
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
62 |
+
num_hidden_layers (`int`, *optional*, defaults to 12):
|
63 |
+
Number of hidden layers in the Transformer encoder.
|
64 |
+
num_attention_heads (`int`, *optional*, defaults to 12):
|
65 |
+
Number of attention heads for each attention layer in the Transformer encoder.
|
66 |
+
num_channels (`int`, *optional*, defaults to 3):
|
67 |
+
Number of channels in the input images.
|
68 |
+
image_size (`int`, *optional*, defaults to 224):
|
69 |
+
The size (resolution) of each image.
|
70 |
+
patch_size (`int`, *optional*, defaults to 16):
|
71 |
+
The size (resolution) of each patch.
|
72 |
+
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
|
73 |
+
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
74 |
+
`"relu"`, `"selu"` and `"gelu_new"` ``"quick_gelu"` are supported.
|
75 |
+
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
|
76 |
+
The epsilon used by the layer normalization layers.
|
77 |
+
attention_dropout (`float`, *optional*, defaults to 0.0):
|
78 |
+
The dropout ratio for the attention probabilities.
|
79 |
+
Example:
|
80 |
+
```python
|
81 |
+
>>> from transformers import SiglipVisionConfig, SiglipVisionModel
|
82 |
+
>>> # Initializing a SiglipVisionConfig with google/siglip-base-patch16-224 style configuration
|
83 |
+
>>> configuration = SiglipVisionConfig()
|
84 |
+
>>> # Initializing a SiglipVisionModel (with random weights) from the google/siglip-base-patch16-224 style configuration
|
85 |
+
>>> model = SiglipVisionModel(configuration)
|
86 |
+
>>> # Accessing the model configuration
|
87 |
+
>>> configuration = model.config
|
88 |
+
```"""
|
89 |
+
|
90 |
+
model_type = "siglip_vision_model"
|
91 |
+
|
92 |
+
def __init__(
|
93 |
+
self,
|
94 |
+
hidden_size=768,
|
95 |
+
intermediate_size=3072,
|
96 |
+
num_hidden_layers=12,
|
97 |
+
num_attention_heads=12,
|
98 |
+
num_channels=3,
|
99 |
+
image_size=224,
|
100 |
+
patch_size=16,
|
101 |
+
hidden_act="gelu_pytorch_tanh",
|
102 |
+
layer_norm_eps=1e-6,
|
103 |
+
attention_dropout=0.0,
|
104 |
+
**kwargs,
|
105 |
+
):
|
106 |
+
super().__init__(**kwargs)
|
107 |
+
|
108 |
+
self.hidden_size = hidden_size
|
109 |
+
self.intermediate_size = intermediate_size
|
110 |
+
self.num_hidden_layers = num_hidden_layers
|
111 |
+
self.num_attention_heads = num_attention_heads
|
112 |
+
self.num_channels = num_channels
|
113 |
+
self.patch_size = patch_size
|
114 |
+
self.image_size = image_size
|
115 |
+
self.attention_dropout = attention_dropout
|
116 |
+
self.layer_norm_eps = layer_norm_eps
|
117 |
+
self.hidden_act = hidden_act
|
118 |
+
|
119 |
+
@classmethod
|
120 |
+
def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
|
121 |
+
cls._set_token_in_kwargs(kwargs)
|
122 |
+
|
123 |
+
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
|
124 |
+
|
125 |
+
# get the vision config dict if we are loading from SiglipConfig
|
126 |
+
if config_dict.get("model_type") == "siglip":
|
127 |
+
config_dict = config_dict["vision_config"]
|
128 |
+
|
129 |
+
if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
|
130 |
+
logger.warning(
|
131 |
+
f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
|
132 |
+
f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
|
133 |
+
)
|
134 |
+
|
135 |
+
return cls.from_dict(config_dict, **kwargs)
|
136 |
+
|
137 |
+
|
138 |
+
_CHECKPOINT_FOR_DOC = "google/siglip-base-patch16-224"
|
139 |
+
|
140 |
+
SIGLIP_PRETRAINED_MODEL_ARCHIVE_LIST = [
|
141 |
+
"google/siglip-base-patch16-224",
|
142 |
+
# See all SigLIP models at https://huggingface.co/models?filter=siglip
|
143 |
+
]
|
144 |
+
|
145 |
+
if is_flash_attn_2_available():
|
146 |
+
from flash_attn import flash_attn_func, flash_attn_varlen_func
|
147 |
+
from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
|
148 |
+
|
149 |
+
|
150 |
+
# Copied from transformers.models.llama.modeling_llama._get_unpad_data
|
151 |
+
def _get_unpad_data(attention_mask):
|
152 |
+
seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
|
153 |
+
indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
|
154 |
+
max_seqlen_in_batch = seqlens_in_batch.max().item()
|
155 |
+
cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.torch.int32), (1, 0))
|
156 |
+
return (
|
157 |
+
indices,
|
158 |
+
cu_seqlens,
|
159 |
+
max_seqlen_in_batch,
|
160 |
+
)
|
161 |
+
|
162 |
+
|
163 |
+
def _trunc_normal_(tensor, mean, std, a, b):
|
164 |
+
# Cut & paste from PyTorch official master until it's in a few official releases - RW
|
165 |
+
# Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
|
166 |
+
def norm_cdf(x):
|
167 |
+
# Computes standard normal cumulative distribution function
|
168 |
+
return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0
|
169 |
+
|
170 |
+
if (mean < a - 2 * std) or (mean > b + 2 * std):
|
171 |
+
warnings.warn(
|
172 |
+
"mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
|
173 |
+
"The distribution of values may be incorrect.",
|
174 |
+
stacklevel=2,
|
175 |
+
)
|
176 |
+
|
177 |
+
# Values are generated by using a truncated uniform distribution and
|
178 |
+
# then using the inverse CDF for the normal distribution.
|
179 |
+
# Get upper and lower cdf values
|
180 |
+
l = norm_cdf((a - mean) / std)
|
181 |
+
u = norm_cdf((b - mean) / std)
|
182 |
+
|
183 |
+
# Uniformly fill tensor with values from [l, u], then translate to
|
184 |
+
# [2l-1, 2u-1].
|
185 |
+
tensor.uniform_(2 * l - 1, 2 * u - 1)
|
186 |
+
|
187 |
+
# Use inverse cdf transform for normal distribution to get truncated
|
188 |
+
# standard normal
|
189 |
+
if tensor.dtype in [torch.float16, torch.bfloat16]:
|
190 |
+
# The `erfinv_` op is not (yet?) defined in float16+cpu, bfloat16+gpu
|
191 |
+
og_dtype = tensor.dtype
|
192 |
+
tensor = tensor.to(torch.float32)
|
193 |
+
tensor.erfinv_()
|
194 |
+
tensor = tensor.to(og_dtype)
|
195 |
+
else:
|
196 |
+
tensor.erfinv_()
|
197 |
+
|
198 |
+
# Transform to proper mean, std
|
199 |
+
tensor.mul_(std * math.sqrt(2.0))
|
200 |
+
tensor.add_(mean)
|
201 |
+
|
202 |
+
# Clamp to ensure it's in the proper range
|
203 |
+
if tensor.dtype == torch.float16:
|
204 |
+
# The `clamp_` op is not (yet?) defined in float16+cpu
|
205 |
+
tensor = tensor.to(torch.float32)
|
206 |
+
tensor.clamp_(min=a, max=b)
|
207 |
+
tensor = tensor.to(torch.float16)
|
208 |
+
else:
|
209 |
+
tensor.clamp_(min=a, max=b)
|
210 |
+
|
211 |
+
|
212 |
+
def trunc_normal_tf_(
|
213 |
+
tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0
|
214 |
+
) -> torch.Tensor:
|
215 |
+
"""Fills the input Tensor with values drawn from a truncated
|
216 |
+
normal distribution. The values are effectively drawn from the
|
217 |
+
normal distribution :math:`\\mathcal{N}(\text{mean}, \text{std}^2)`
|
218 |
+
with values outside :math:`[a, b]` redrawn until they are within
|
219 |
+
the bounds. The method used for generating the random values works
|
220 |
+
best when :math:`a \\leq \text{mean} \\leq b`.
|
221 |
+
NOTE: this 'tf' variant behaves closer to Tensorflow / JAX impl where the
|
222 |
+
bounds [a, b] are applied when sampling the normal distribution with mean=0, std=1.0
|
223 |
+
and the result is subsquently scaled and shifted by the mean and std args.
|
224 |
+
Args:
|
225 |
+
tensor: an n-dimensional `torch.Tensor`
|
226 |
+
mean: the mean of the normal distribution
|
227 |
+
std: the standard deviation of the normal distribution
|
228 |
+
a: the minimum cutoff value
|
229 |
+
b: the maximum cutoff value
|
230 |
+
"""
|
231 |
+
with torch.no_grad():
|
232 |
+
_trunc_normal_(tensor, 0, 1.0, a, b)
|
233 |
+
tensor.mul_(std).add_(mean)
|
234 |
+
|
235 |
+
|
236 |
+
def variance_scaling_(tensor, scale=1.0, mode="fan_in", distribution="normal"):
|
237 |
+
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
|
238 |
+
if mode == "fan_in":
|
239 |
+
denom = fan_in
|
240 |
+
elif mode == "fan_out":
|
241 |
+
denom = fan_out
|
242 |
+
elif mode == "fan_avg":
|
243 |
+
denom = (fan_in + fan_out) / 2
|
244 |
+
|
245 |
+
variance = scale / denom
|
246 |
+
|
247 |
+
if distribution == "truncated_normal":
|
248 |
+
# constant is stddev of standard normal truncated to (-2, 2)
|
249 |
+
trunc_normal_tf_(tensor, std=math.sqrt(variance) / 0.87962566103423978)
|
250 |
+
elif distribution == "normal":
|
251 |
+
with torch.no_grad():
|
252 |
+
tensor.normal_(std=math.sqrt(variance))
|
253 |
+
elif distribution == "uniform":
|
254 |
+
bound = math.sqrt(3 * variance)
|
255 |
+
with torch.no_grad():
|
256 |
+
tensor.uniform_(-bound, bound)
|
257 |
+
else:
|
258 |
+
raise ValueError(f"invalid distribution {distribution}")
|
259 |
+
|
260 |
+
|
261 |
+
def lecun_normal_(tensor):
|
262 |
+
variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal")
|
263 |
+
|
264 |
+
|
265 |
+
def default_flax_embed_init(tensor):
|
266 |
+
variance_scaling_(tensor, mode="fan_in", distribution="normal")
|
267 |
+
|
268 |
+
|
269 |
+
@dataclass
|
270 |
+
# Copied from transformers.models.clip.modeling_clip.CLIPVisionModelOutput with CLIP->Siglip
|
271 |
+
class SiglipVisionModelOutput(ModelOutput):
|
272 |
+
"""
|
273 |
+
Base class for vision model's outputs that also contains image embeddings of the pooling of the last hidden states.
|
274 |
+
Args:
|
275 |
+
image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)` *optional* returned when model is initialized with `with_projection=True`):
|
276 |
+
The image embeddings obtained by applying the projection layer to the pooler_output.
|
277 |
+
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
|
278 |
+
Sequence of hidden-states at the output of the last layer of the model.
|
279 |
+
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
280 |
+
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
|
281 |
+
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
282 |
+
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
|
283 |
+
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
284 |
+
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
285 |
+
sequence_length)`.
|
286 |
+
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
|
287 |
+
heads.
|
288 |
+
"""
|
289 |
+
|
290 |
+
image_embeds: Optional[torch.FloatTensor] = None
|
291 |
+
last_hidden_state: torch.FloatTensor = None
|
292 |
+
hidden_states: Optional[Tuple[torch.FloatTensor]] = None
|
293 |
+
attentions: Optional[Tuple[torch.FloatTensor]] = None
|
294 |
+
|
295 |
+
|
296 |
+
class SiglipVisionEmbeddings(nn.Module):
|
297 |
+
def __init__(self, config: SiglipVisionConfig):
|
298 |
+
super().__init__()
|
299 |
+
self.config = config
|
300 |
+
self.embed_dim = config.hidden_size
|
301 |
+
self.image_size = config.image_size
|
302 |
+
self.patch_size = config.patch_size
|
303 |
+
|
304 |
+
self.patch_embedding = nn.Conv2d(
|
305 |
+
in_channels=config.num_channels,
|
306 |
+
out_channels=self.embed_dim,
|
307 |
+
kernel_size=self.patch_size,
|
308 |
+
stride=self.patch_size,
|
309 |
+
padding="valid",
|
310 |
+
)
|
311 |
+
|
312 |
+
self.num_patches_per_side = self.image_size // self.patch_size
|
313 |
+
self.num_patches = self.num_patches_per_side**2
|
314 |
+
self.num_positions = self.num_patches
|
315 |
+
self.position_embedding = nn.Embedding(self.num_positions, self.embed_dim)
|
316 |
+
|
317 |
+
boundaries = torch.arange(1 / self.num_patches_per_side, 1.0, 1 / self.num_patches_per_side)
|
318 |
+
|
319 |
+
nb_patches_h = 32
|
320 |
+
nb_patches_w = 32
|
321 |
+
fractional_coords_h = torch.arange(0, 1 - 1e-6, 1 / nb_patches_h)
|
322 |
+
fractional_coords_w = torch.arange(0, 1 - 1e-6, 1 / nb_patches_w)
|
323 |
+
|
324 |
+
bucket_coords_h = torch.bucketize(fractional_coords_h, boundaries, right=True)
|
325 |
+
bucket_coords_w = torch.bucketize(fractional_coords_w, boundaries, right=True)
|
326 |
+
|
327 |
+
position_ids = (bucket_coords_h[:, None] * self.num_patches_per_side + bucket_coords_w).flatten()
|
328 |
+
self.position_ids = position_ids
|
329 |
+
|
330 |
+
def forward(self, pixel_values: torch.FloatTensor, patch_attention_mask: torch.BoolTensor=None, tgt_sizes: Optional[torch.IntTensor]=None) -> torch.Tensor:
|
331 |
+
|
332 |
+
patch_embeds = self.patch_embedding(pixel_values).view(1, 1152, 1024)
|
333 |
+
embeddings = patch_embeds.transpose(1, 2)
|
334 |
+
embeddings = embeddings + self.position_embedding(self.position_ids)
|
335 |
+
return embeddings
|
336 |
+
|
337 |
+
|
338 |
+
class SiglipAttention(nn.Module):
|
339 |
+
"""Multi-headed attention from 'Attention Is All You Need' paper"""
|
340 |
+
|
341 |
+
# Copied from transformers.models.clip.modeling_clip.CLIPAttention.__init__
|
342 |
+
def __init__(self, config):
|
343 |
+
super().__init__()
|
344 |
+
self.config = config
|
345 |
+
self.embed_dim = config.hidden_size
|
346 |
+
self.num_heads = config.num_attention_heads
|
347 |
+
self.head_dim = self.embed_dim // self.num_heads
|
348 |
+
if self.head_dim * self.num_heads != self.embed_dim:
|
349 |
+
raise ValueError(
|
350 |
+
f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
|
351 |
+
f" {self.num_heads})."
|
352 |
+
)
|
353 |
+
self.scale = self.head_dim**-0.5
|
354 |
+
self.dropout = config.attention_dropout
|
355 |
+
|
356 |
+
self.k_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
357 |
+
self.v_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
358 |
+
self.q_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
359 |
+
self.out_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
360 |
+
|
361 |
+
def forward(
|
362 |
+
self,
|
363 |
+
hidden_states: torch.Tensor,
|
364 |
+
attention_mask: Optional[torch.Tensor] = None,
|
365 |
+
output_attentions: Optional[bool] = False,
|
366 |
+
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
|
367 |
+
"""Input shape: Batch x Time x Channel"""
|
368 |
+
|
369 |
+
batch_size, q_len = 1, 1024
|
370 |
+
|
371 |
+
query_states = self.q_proj(hidden_states)
|
372 |
+
key_states = self.k_proj(hidden_states)
|
373 |
+
value_states = self.v_proj(hidden_states)
|
374 |
+
|
375 |
+
query_states = query_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
376 |
+
key_states = key_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
377 |
+
value_states = value_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
378 |
+
|
379 |
+
k_v_seq_len = 1024
|
380 |
+
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) * self.scale
|
381 |
+
|
382 |
+
if attn_weights.size() != (batch_size, self.num_heads, q_len, k_v_seq_len):
|
383 |
+
raise ValueError(
|
384 |
+
f"Attention weights should be of size {(batch_size, self.num_heads, q_len, k_v_seq_len)}, but is"
|
385 |
+
f" {attn_weights.size()}"
|
386 |
+
)
|
387 |
+
|
388 |
+
if attention_mask is not None:
|
389 |
+
if attention_mask.size() != (batch_size, 1, q_len, k_v_seq_len):
|
390 |
+
raise ValueError(
|
391 |
+
f"Attention mask should be of size {(batch_size, 1, q_len, k_v_seq_len)}, but is {attention_mask.size()}"
|
392 |
+
)
|
393 |
+
attn_weights = attn_weights + attention_mask
|
394 |
+
|
395 |
+
# upcast attention to fp32
|
396 |
+
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
|
397 |
+
attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
|
398 |
+
attn_output = torch.matmul(attn_weights, value_states)
|
399 |
+
|
400 |
+
if attn_output.size() != (batch_size, self.num_heads, q_len, self.head_dim):
|
401 |
+
raise ValueError(
|
402 |
+
f"`attn_output` should be of size {(batch_size, self.num_heads, q_len, self.head_dim)}, but is"
|
403 |
+
f" {attn_output.size()}"
|
404 |
+
)
|
405 |
+
|
406 |
+
attn_output = attn_output.transpose(1, 2).contiguous()
|
407 |
+
attn_output = attn_output.reshape(batch_size, q_len, self.embed_dim)
|
408 |
+
|
409 |
+
attn_output = self.out_proj(attn_output)
|
410 |
+
|
411 |
+
return attn_output, attn_weights
|
412 |
+
|
413 |
+
|
414 |
+
class SiglipFlashAttention2(SiglipAttention):
|
415 |
+
"""
|
416 |
+
Llama flash attention module. This module inherits from `LlamaAttention` as the weights of the module stay
|
417 |
+
untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
|
418 |
+
flash attention and deal with padding tokens in case the input contains any of them.
|
419 |
+
"""
|
420 |
+
|
421 |
+
def __init__(self, *args, **kwargs):
|
422 |
+
super().__init__(*args, **kwargs)
|
423 |
+
self.is_causal = False # Hack to make sure we don't use a causal mask
|
424 |
+
|
425 |
+
def forward(
|
426 |
+
self,
|
427 |
+
hidden_states: torch.Tensor,
|
428 |
+
attention_mask: Optional[torch.LongTensor] = None,
|
429 |
+
position_ids: Optional[torch.LongTensor] = None,
|
430 |
+
past_key_value: Optional[Tuple[torch.Tensor]] = None,
|
431 |
+
output_attentions: bool = False,
|
432 |
+
use_cache: bool = False,
|
433 |
+
**kwargs,
|
434 |
+
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
|
435 |
+
output_attentions = False
|
436 |
+
|
437 |
+
bsz, q_len, _ = hidden_states.size()
|
438 |
+
|
439 |
+
query_states = self.q_proj(hidden_states)
|
440 |
+
key_states = self.k_proj(hidden_states)
|
441 |
+
value_states = self.v_proj(hidden_states)
|
442 |
+
|
443 |
+
# Flash attention requires the input to have the shape
|
444 |
+
# batch_size x seq_length x head_dim x hidden_dim
|
445 |
+
# therefore we just need to keep the original shape
|
446 |
+
query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
447 |
+
key_states = key_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
448 |
+
value_states = value_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
|
449 |
+
|
450 |
+
kv_seq_len = key_states.shape[-2]
|
451 |
+
if past_key_value is not None:
|
452 |
+
kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
|
453 |
+
# cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
|
454 |
+
# query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
|
455 |
+
|
456 |
+
# if past_key_value is not None:
|
457 |
+
# cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
|
458 |
+
# key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
|
459 |
+
|
460 |
+
# TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
|
461 |
+
# to be able to avoid many of these transpose/reshape/view.
|
462 |
+
query_states = query_states.transpose(1, 2)
|
463 |
+
key_states = key_states.transpose(1, 2)
|
464 |
+
value_states = value_states.transpose(1, 2)
|
465 |
+
|
466 |
+
dropout_rate = self.dropout if self.training else 0.0
|
467 |
+
|
468 |
+
# In PEFT, usually we cast the layer norms in float32 for training stability reasons
|
469 |
+
# therefore the input hidden states gets silently casted in float32. Hence, we need
|
470 |
+
# cast them back in the correct dtype just to be sure everything works as expected.
|
471 |
+
# This might slowdown training & inference so it is recommended to not cast the LayerNorms
|
472 |
+
# in fp32. (LlamaRMSNorm handles it correctly)
|
473 |
+
|
474 |
+
input_dtype = query_states.dtype
|
475 |
+
if input_dtype == torch.float32:
|
476 |
+
if torch.is_autocast_enabled():
|
477 |
+
target_dtype = torch.get_autocast_gpu_dtype()
|
478 |
+
# Handle the case where the model is quantized
|
479 |
+
elif hasattr(self.config, "_pre_quantization_dtype"):
|
480 |
+
target_dtype = self.config._pre_quantization_dtype
|
481 |
+
else:
|
482 |
+
target_dtype = self.q_proj.weight.dtype
|
483 |
+
|
484 |
+
logger.warning_once(
|
485 |
+
"The input hidden states seems to be silently casted in float32, this might be related to the fact"
|
486 |
+
" you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
|
487 |
+
f" {target_dtype}."
|
488 |
+
)
|
489 |
+
|
490 |
+
query_states = query_states.to(target_dtype)
|
491 |
+
key_states = key_states.to(target_dtype)
|
492 |
+
value_states = value_states.to(target_dtype)
|
493 |
+
|
494 |
+
attn_output = self._flash_attention_forward(
|
495 |
+
query_states, key_states, value_states, attention_mask, q_len, dropout=dropout_rate
|
496 |
+
)
|
497 |
+
|
498 |
+
attn_output = attn_output.reshape(bsz, q_len, self.embed_dim).contiguous()
|
499 |
+
attn_output = self.out_proj(attn_output)
|
500 |
+
|
501 |
+
if not output_attentions:
|
502 |
+
attn_weights = None
|
503 |
+
|
504 |
+
return attn_output, attn_weights
|
505 |
+
|
506 |
+
def _flash_attention_forward(
|
507 |
+
self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
|
508 |
+
):
|
509 |
+
"""
|
510 |
+
Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token
|
511 |
+
first unpad the input, then computes the attention scores and pad the final attention scores.
|
512 |
+
Args:
|
513 |
+
query_states (`torch.Tensor`):
|
514 |
+
Input query states to be passed to Flash Attention API
|
515 |
+
key_states (`torch.Tensor`):
|
516 |
+
Input key states to be passed to Flash Attention API
|
517 |
+
value_states (`torch.Tensor`):
|
518 |
+
Input value states to be passed to Flash Attention API
|
519 |
+
attention_mask (`torch.Tensor`):
|
520 |
+
The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
|
521 |
+
position of padding tokens and 1 for the position of non-padding tokens.
|
522 |
+
dropout (`int`, *optional*):
|
523 |
+
Attention dropout
|
524 |
+
softmax_scale (`float`, *optional*):
|
525 |
+
The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
|
526 |
+
"""
|
527 |
+
|
528 |
+
# TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
|
529 |
+
causal = self.is_causal and query_length != 1
|
530 |
+
|
531 |
+
# Contains at least one padding token in the sequence
|
532 |
+
if attention_mask is not None:
|
533 |
+
batch_size = query_states.shape[0]
|
534 |
+
query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
|
535 |
+
query_states, key_states, value_states, attention_mask, query_length
|
536 |
+
)
|
537 |
+
|
538 |
+
cu_seqlens_q, cu_seqlens_k = cu_seq_lens
|
539 |
+
max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
|
540 |
+
|
541 |
+
attn_output_unpad = flash_attn_varlen_func(
|
542 |
+
query_states,
|
543 |
+
key_states,
|
544 |
+
value_states,
|
545 |
+
cu_seqlens_q=cu_seqlens_q,
|
546 |
+
cu_seqlens_k=cu_seqlens_k,
|
547 |
+
max_seqlen_q=max_seqlen_in_batch_q,
|
548 |
+
max_seqlen_k=max_seqlen_in_batch_k,
|
549 |
+
dropout_p=dropout,
|
550 |
+
softmax_scale=softmax_scale,
|
551 |
+
causal=causal,
|
552 |
+
)
|
553 |
+
|
554 |
+
attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
|
555 |
+
else:
|
556 |
+
attn_output = flash_attn_func(
|
557 |
+
query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
|
558 |
+
)
|
559 |
+
|
560 |
+
return attn_output
|
561 |
+
|
562 |
+
def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
|
563 |
+
indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
|
564 |
+
batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
|
565 |
+
|
566 |
+
key_layer = index_first_axis(
|
567 |
+
key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
|
568 |
+
)
|
569 |
+
value_layer = index_first_axis(
|
570 |
+
value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
|
571 |
+
)
|
572 |
+
if query_length == kv_seq_len:
|
573 |
+
query_layer = index_first_axis(
|
574 |
+
query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
|
575 |
+
)
|
576 |
+
cu_seqlens_q = cu_seqlens_k
|
577 |
+
max_seqlen_in_batch_q = max_seqlen_in_batch_k
|
578 |
+
indices_q = indices_k
|
579 |
+
elif query_length == 1:
|
580 |
+
max_seqlen_in_batch_q = 1
|
581 |
+
cu_seqlens_q = torch.arange(
|
582 |
+
batch_size + 1, dtype=torch.int32, device=query_layer.device
|
583 |
+
) # There is a memcpy here, that is very bad.
|
584 |
+
indices_q = cu_seqlens_q[:-1]
|
585 |
+
query_layer = query_layer.squeeze(1)
|
586 |
+
else:
|
587 |
+
# The -q_len: slice assumes left padding.
|
588 |
+
attention_mask = attention_mask[:, -query_length:]
|
589 |
+
query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
|
590 |
+
|
591 |
+
return (
|
592 |
+
query_layer,
|
593 |
+
key_layer,
|
594 |
+
value_layer,
|
595 |
+
indices_q,
|
596 |
+
(cu_seqlens_q, cu_seqlens_k),
|
597 |
+
(max_seqlen_in_batch_q, max_seqlen_in_batch_k),
|
598 |
+
)
|
599 |
+
|
600 |
+
|
601 |
+
# Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->Siglip
|
602 |
+
class SiglipMLP(nn.Module):
|
603 |
+
def __init__(self, config):
|
604 |
+
super().__init__()
|
605 |
+
self.config = config
|
606 |
+
self.activation_fn = ACT2FN[config.hidden_act]
|
607 |
+
self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
|
608 |
+
self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
|
609 |
+
|
610 |
+
def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
|
611 |
+
hidden_states = self.fc1(hidden_states)
|
612 |
+
hidden_states = self.activation_fn(hidden_states)
|
613 |
+
hidden_states = self.fc2(hidden_states)
|
614 |
+
return hidden_states
|
615 |
+
|
616 |
+
|
617 |
+
# Copied from transformers.models.clip.modeling_clip.CLIPEncoderLayer with CLIP->Siglip
|
618 |
+
class SiglipEncoderLayer(nn.Module):
|
619 |
+
def __init__(self, config: SiglipVisionConfig):
|
620 |
+
super().__init__()
|
621 |
+
self.embed_dim = config.hidden_size
|
622 |
+
self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
|
623 |
+
self.self_attn = (
|
624 |
+
SiglipAttention(config)
|
625 |
+
if not self._use_flash_attention_2
|
626 |
+
else SiglipFlashAttention2(config)
|
627 |
+
)
|
628 |
+
self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
|
629 |
+
self.mlp = SiglipMLP(config)
|
630 |
+
self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
|
631 |
+
|
632 |
+
def forward(
|
633 |
+
self,
|
634 |
+
hidden_states: torch.Tensor,
|
635 |
+
attention_mask: torch.Tensor,
|
636 |
+
output_attentions: Optional[bool] = False,
|
637 |
+
) -> Tuple[torch.FloatTensor]:
|
638 |
+
"""
|
639 |
+
Args:
|
640 |
+
hidden_states (`torch.FloatTensor`):
|
641 |
+
Input to the layer of shape `(batch, seq_len, embed_dim)`.
|
642 |
+
attention_mask (`torch.FloatTensor`):
|
643 |
+
Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negative values.
|
644 |
+
output_attentions (`bool`, *optional*, defaults to `False`):
|
645 |
+
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
|
646 |
+
returned tensors for more detail.
|
647 |
+
"""
|
648 |
+
residual = hidden_states
|
649 |
+
|
650 |
+
hidden_states = self.layer_norm1(hidden_states)
|
651 |
+
hidden_states, attn_weights = self.self_attn(
|
652 |
+
hidden_states=hidden_states,
|
653 |
+
attention_mask=attention_mask,
|
654 |
+
output_attentions=output_attentions,
|
655 |
+
)
|
656 |
+
hidden_states = residual + hidden_states
|
657 |
+
|
658 |
+
residual = hidden_states
|
659 |
+
hidden_states = self.layer_norm2(hidden_states)
|
660 |
+
hidden_states = self.mlp(hidden_states)
|
661 |
+
hidden_states = residual + hidden_states
|
662 |
+
|
663 |
+
outputs = (hidden_states,)
|
664 |
+
|
665 |
+
if output_attentions:
|
666 |
+
outputs += (attn_weights,)
|
667 |
+
|
668 |
+
return outputs
|
669 |
+
|
670 |
+
|
671 |
+
class SiglipPreTrainedModel(PreTrainedModel):
|
672 |
+
"""
|
673 |
+
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
|
674 |
+
models.
|
675 |
+
"""
|
676 |
+
|
677 |
+
config_class = SiglipVisionConfig
|
678 |
+
base_model_prefix = "siglip"
|
679 |
+
supports_gradient_checkpointing = True
|
680 |
+
|
681 |
+
def _init_weights(self, module):
|
682 |
+
"""Initialize the weights"""
|
683 |
+
|
684 |
+
if isinstance(module, SiglipVisionEmbeddings):
|
685 |
+
width = self.config.hidden_size
|
686 |
+
nn.init.normal_(module.position_embedding.weight, std=1 / np.sqrt(width))
|
687 |
+
elif isinstance(module, nn.Embedding):
|
688 |
+
default_flax_embed_init(module.weight)
|
689 |
+
elif isinstance(module, SiglipAttention):
|
690 |
+
nn.init.normal_(module.q_proj.weight)
|
691 |
+
nn.init.normal_(module.k_proj.weight)
|
692 |
+
nn.init.normal_(module.v_proj.weight)
|
693 |
+
nn.init.normal_(module.out_proj.weight)
|
694 |
+
nn.init.zeros_(module.q_proj.bias)
|
695 |
+
nn.init.zeros_(module.k_proj.bias)
|
696 |
+
nn.init.zeros_(module.v_proj.bias)
|
697 |
+
nn.init.zeros_(module.out_proj.bias)
|
698 |
+
elif isinstance(module, SiglipMLP):
|
699 |
+
nn.init.normal_(module.fc1.weight)
|
700 |
+
nn.init.normal_(module.fc2.weight)
|
701 |
+
nn.init.normal_(module.fc1.bias, std=1e-6)
|
702 |
+
nn.init.normal_(module.fc2.bias, std=1e-6)
|
703 |
+
elif isinstance(module, (nn.Linear, nn.Conv2d)):
|
704 |
+
lecun_normal_(module.weight)
|
705 |
+
if module.bias is not None:
|
706 |
+
nn.init.zeros_(module.bias)
|
707 |
+
elif isinstance(module, nn.LayerNorm):
|
708 |
+
module.bias.data.zero_()
|
709 |
+
module.weight.data.fill_(1.0)
|
710 |
+
|
711 |
+
|
712 |
+
SIGLIP_START_DOCSTRING = r"""
|
713 |
+
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
714 |
+
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
|
715 |
+
etc.)
|
716 |
+
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
717 |
+
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
718 |
+
and behavior.
|
719 |
+
Parameters:
|
720 |
+
config ([`SiglipVisionConfig`]): Model configuration class with all the parameters of the model.
|
721 |
+
Initializing with a config file does not load the weights associated with the model, only the
|
722 |
+
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
723 |
+
"""
|
724 |
+
|
725 |
+
|
726 |
+
SIGLIP_VISION_INPUTS_DOCSTRING = r"""
|
727 |
+
Args:
|
728 |
+
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
|
729 |
+
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
|
730 |
+
[`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details.
|
731 |
+
output_attentions (`bool`, *optional*):
|
732 |
+
Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
|
733 |
+
tensors for more detail.
|
734 |
+
output_hidden_states (`bool`, *optional*):
|
735 |
+
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
|
736 |
+
more detail.
|
737 |
+
return_dict (`bool`, *optional*):
|
738 |
+
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
|
739 |
+
"""
|
740 |
+
|
741 |
+
|
742 |
+
# Copied from transformers.models.clip.modeling_clip.CLIPEncoder with CLIP->Siglip
|
743 |
+
class SiglipEncoder(nn.Module):
|
744 |
+
"""
|
745 |
+
Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
|
746 |
+
[`SiglipEncoderLayer`].
|
747 |
+
Args:
|
748 |
+
config: SiglipConfig
|
749 |
+
"""
|
750 |
+
|
751 |
+
def __init__(self, config: SiglipVisionConfig):
|
752 |
+
super().__init__()
|
753 |
+
self.config = config
|
754 |
+
self.layers = nn.ModuleList([SiglipEncoderLayer(config) for _ in range(config.num_hidden_layers)])
|
755 |
+
self.gradient_checkpointing = False
|
756 |
+
|
757 |
+
# Ignore copy
|
758 |
+
def forward(
|
759 |
+
self,
|
760 |
+
inputs_embeds,
|
761 |
+
attention_mask: Optional[torch.Tensor] = None,
|
762 |
+
output_attentions: Optional[bool] = None,
|
763 |
+
output_hidden_states: Optional[bool] = None,
|
764 |
+
return_dict: Optional[bool] = None,
|
765 |
+
) -> Union[Tuple, BaseModelOutput]:
|
766 |
+
r"""
|
767 |
+
Args:
|
768 |
+
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
|
769 |
+
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
|
770 |
+
This is useful if you want more control over how to convert `input_ids` indices into associated vectors
|
771 |
+
than the model's internal embedding lookup matrix.
|
772 |
+
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
|
773 |
+
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
|
774 |
+
- 1 for tokens that are **not masked**,
|
775 |
+
- 0 for tokens that are **masked**.
|
776 |
+
[What are attention masks?](../glossary#attention-mask)
|
777 |
+
output_attentions (`bool`, *optional*):
|
778 |
+
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
|
779 |
+
returned tensors for more detail.
|
780 |
+
output_hidden_states (`bool`, *optional*):
|
781 |
+
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
|
782 |
+
for more detail.
|
783 |
+
return_dict (`bool`, *optional*):
|
784 |
+
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
|
785 |
+
"""
|
786 |
+
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
|
787 |
+
output_hidden_states = (
|
788 |
+
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
|
789 |
+
)
|
790 |
+
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
|
791 |
+
|
792 |
+
encoder_states = () if output_hidden_states else None
|
793 |
+
all_attentions = () if output_attentions else None
|
794 |
+
|
795 |
+
hidden_states = inputs_embeds
|
796 |
+
for encoder_layer in self.layers:
|
797 |
+
if output_hidden_states:
|
798 |
+
encoder_states = encoder_states + (hidden_states,)
|
799 |
+
if self.gradient_checkpointing and self.training:
|
800 |
+
layer_outputs = self._gradient_checkpointing_func(
|
801 |
+
encoder_layer.__call__,
|
802 |
+
hidden_states,
|
803 |
+
attention_mask,
|
804 |
+
output_attentions,
|
805 |
+
)
|
806 |
+
else:
|
807 |
+
layer_outputs = encoder_layer(
|
808 |
+
hidden_states,
|
809 |
+
attention_mask,
|
810 |
+
output_attentions=output_attentions,
|
811 |
+
)
|
812 |
+
|
813 |
+
hidden_states = layer_outputs[0]
|
814 |
+
|
815 |
+
if output_attentions:
|
816 |
+
all_attentions = all_attentions + (layer_outputs[1],)
|
817 |
+
|
818 |
+
if output_hidden_states:
|
819 |
+
encoder_states = encoder_states + (hidden_states,)
|
820 |
+
|
821 |
+
if not return_dict:
|
822 |
+
return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
|
823 |
+
return BaseModelOutput(
|
824 |
+
last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
|
825 |
+
)
|
826 |
+
|
827 |
+
@add_start_docstrings(
|
828 |
+
"""The vision model from SigLIP without any head or projection on top.""",
|
829 |
+
SIGLIP_START_DOCSTRING
|
830 |
+
)
|
831 |
+
class SiglipVisionTransformer(SiglipPreTrainedModel):
|
832 |
+
config_class = SiglipVisionConfig
|
833 |
+
main_input_name = "pixel_values"
|
834 |
+
_supports_flash_attn_2 = True
|
835 |
+
|
836 |
+
def __init__(self, config: SiglipVisionConfig):
|
837 |
+
super().__init__(config)
|
838 |
+
self.config = config
|
839 |
+
embed_dim = config.hidden_size
|
840 |
+
|
841 |
+
self.embeddings = SiglipVisionEmbeddings(config)
|
842 |
+
self.encoder = SiglipEncoder(config)
|
843 |
+
self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
|
844 |
+
self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
|
845 |
+
|
846 |
+
# Initialize weights and apply final processing
|
847 |
+
self.post_init()
|
848 |
+
|
849 |
+
def get_input_embeddings(self) -> nn.Module:
|
850 |
+
return self.embeddings.patch_embedding
|
851 |
+
|
852 |
+
@add_start_docstrings_to_model_forward(SIGLIP_VISION_INPUTS_DOCSTRING)
|
853 |
+
@replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=SiglipVisionConfig)
|
854 |
+
def forward(
|
855 |
+
self,
|
856 |
+
pixel_values,
|
857 |
+
patch_attention_mask: Optional[torch.BoolTensor] = None,
|
858 |
+
tgt_sizes: Optional[torch.IntTensor] = None,
|
859 |
+
output_attentions: Optional[bool] = None,
|
860 |
+
output_hidden_states: Optional[bool] = None,
|
861 |
+
return_dict: Optional[bool] = None,
|
862 |
+
) -> Union[Tuple, BaseModelOutputWithPooling]:
|
863 |
+
r"""
|
864 |
+
Returns:
|
865 |
+
"""
|
866 |
+
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
|
867 |
+
output_hidden_states = (
|
868 |
+
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
|
869 |
+
)
|
870 |
+
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
|
871 |
+
|
872 |
+
hidden_states = self.embeddings(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask, tgt_sizes=tgt_sizes)
|
873 |
+
|
874 |
+
encoder_outputs = self.encoder(
|
875 |
+
inputs_embeds=hidden_states,
|
876 |
+
attention_mask=None,
|
877 |
+
output_attentions=output_attentions,
|
878 |
+
output_hidden_states=output_hidden_states,
|
879 |
+
return_dict=return_dict,
|
880 |
+
)
|
881 |
+
|
882 |
+
last_hidden_state = encoder_outputs[0]
|
883 |
+
last_hidden_state = self.post_layernorm(last_hidden_state)
|
884 |
+
|
885 |
+
if not return_dict:
|
886 |
+
return (last_hidden_state, None) + encoder_outputs[1:]
|
887 |
+
|
888 |
+
return BaseModelOutputWithPooling(
|
889 |
+
last_hidden_state=last_hidden_state,
|
890 |
+
pooler_output=None,
|
891 |
+
hidden_states=encoder_outputs.hidden_states,
|
892 |
+
attentions=encoder_outputs.attentions,
|
893 |
+
)
|
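The patched files above and below pin the SigLIP vision tower to a static graph: exactly 1024 patch tokens (a 32 × 32 grid, i.e. a 448 × 448 input at patch size 14), batch size 1, no dynamic masking, which is what makes the encoder exportable to ONNX and then RKNN. Below is a minimal export sketch; the repo's real script is `vision_export_onnx.py`, and the local path, the `.vpm` attribute and the dummy-input layout here are assumptions rather than copies of it.

```python
# Hypothetical export sketch, not the repo's vision_export_onnx.py.
# Assumptions: the patched files have replaced the originals inside a local
# MiniCPM-V-2_6 checkout, the model exposes the vision tower as `.vpm`, and the
# encoder is fed a plain 448x448 image (32x32 patches of size 14 = 1024 tokens).
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("./MiniCPM-V-2_6", trust_remote_code=True).eval()

class VisionWrapper(torch.nn.Module):
    """Expose only last_hidden_state so the ONNX graph has a single tensor output."""
    def __init__(self, vpm):
        super().__init__()
        self.vpm = vpm

    def forward(self, pixel_values):
        return self.vpm(pixel_values, return_dict=False)[0]

dummy = torch.randn(1, 3, 448, 448)  # assumed layout; check vision_export_onnx.py
torch.onnx.export(
    VisionWrapper(model.vpm), (dummy,), "vision_transformer.onnx",
    input_names=["pixel_values"], output_names=["last_hidden_state"],
    opset_version=17,
)
```

The fixed shapes matter because RKNN conversion expects a static graph; changing the 32 × 32 patch grid would require re-exporting with matching constants in both patched files.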
patched_resampler.py
ADDED
@@ -0,0 +1,771 @@
1 |
+
from functools import partial
|
2 |
+
from typing import Optional, Tuple
|
3 |
+
import numpy as np
|
4 |
+
import warnings
|
5 |
+
|
6 |
+
import torch
|
7 |
+
from torch import nn
|
8 |
+
from torch import Tensor
|
9 |
+
import torch.nn.functional as F
|
10 |
+
from torch.nn.functional import *
|
11 |
+
from torch.nn.modules.activation import *
|
12 |
+
from torch.nn.init import trunc_normal_, constant_, xavier_normal_, xavier_uniform_
|
13 |
+
|
14 |
+
from transformers.integrations import is_deepspeed_zero3_enabled
|
15 |
+
|
16 |
+
def get_2d_sincos_pos_embed(embed_dim, image_size):
|
17 |
+
"""
|
18 |
+
image_size: image_size or (image_height, image_width)
|
19 |
+
return:
|
20 |
+
pos_embed: [image_height, image_width, embed_dim]
|
21 |
+
"""
|
22 |
+
if isinstance(image_size, int):
|
23 |
+
grid_h_size, grid_w_size = image_size, image_size
|
24 |
+
else:
|
25 |
+
grid_h_size, grid_w_size = image_size[0], image_size[1]
|
26 |
+
|
27 |
+
grid_h = np.arange(grid_h_size, dtype=np.float32)
|
28 |
+
grid_w = np.arange(grid_w_size, dtype=np.float32)
|
29 |
+
grid = np.meshgrid(grid_w, grid_h) # here w goes first
|
30 |
+
grid = np.stack(grid, axis=0)
|
31 |
+
|
32 |
+
pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
|
33 |
+
return pos_embed
|
34 |
+
|
35 |
+
|
36 |
+
def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
|
37 |
+
assert embed_dim % 2 == 0
|
38 |
+
|
39 |
+
# use half of dimensions to encode grid_h
|
40 |
+
emb_h = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[0]) # (H, W, D/2)
|
41 |
+
emb_w = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[1]) # (H, W, D/2)
|
42 |
+
|
43 |
+
emb = np.concatenate([emb_h, emb_w], axis=-1) # (H, W, D)
|
44 |
+
return emb
|
45 |
+
|
46 |
+
|
47 |
+
def get_1d_sincos_pos_embed_from_grid_new(embed_dim, pos):
|
48 |
+
"""
|
49 |
+
embed_dim: output dimension for each position
|
50 |
+
pos: a list of positions to be encoded: size (H, W)
|
51 |
+
out: (H, W, D)
|
52 |
+
"""
|
53 |
+
assert embed_dim % 2 == 0
|
54 |
+
omega = np.arange(embed_dim // 2, dtype=np.float32)
|
55 |
+
omega /= embed_dim / 2.
|
56 |
+
omega = 1. / 10000 ** omega # (D/2,)
|
57 |
+
|
58 |
+
out = np.einsum('hw,d->hwd', pos, omega) # (H, W, D/2), outer product
|
59 |
+
|
60 |
+
emb_sin = np.sin(out) # (H, W, D/2)
|
61 |
+
emb_cos = np.cos(out) # (H, W, D/2)
|
62 |
+
|
63 |
+
emb = np.concatenate([emb_sin, emb_cos], axis=-1) # (H, W, D)
|
64 |
+
return emb
|
65 |
+
|
66 |
+
|
67 |
+
class Resampler(nn.Module):
|
68 |
+
"""
|
69 |
+
A 2D perceiver-resampler network with one cross-attention layer, driven by
|
70 |
+
given learnable queries and 2d sincos pos_emb
|
71 |
+
Outputs:
|
72 |
+
A tensor with the shape of (batch_size, num_queries, embed_dim)
|
73 |
+
"""
|
74 |
+
|
75 |
+
def __init__(
|
76 |
+
self,
|
77 |
+
num_queries,
|
78 |
+
embed_dim,
|
79 |
+
num_heads,
|
80 |
+
kv_dim=None,
|
81 |
+
norm_layer=partial(nn.LayerNorm, eps=1e-6),
|
82 |
+
adaptive=False,
|
83 |
+
max_size=(70, 70),
|
84 |
+
):
|
85 |
+
super().__init__()
|
86 |
+
self.num_queries = num_queries
|
87 |
+
self.embed_dim = embed_dim
|
88 |
+
self.num_heads = num_heads
|
89 |
+
self.adaptive = adaptive
|
90 |
+
self.max_size = max_size
|
91 |
+
|
92 |
+
self.query = nn.Parameter(torch.zeros(self.num_queries, embed_dim))
|
93 |
+
|
94 |
+
if kv_dim is not None and kv_dim != embed_dim:
|
95 |
+
self.kv_proj = nn.Linear(kv_dim, embed_dim, bias=False)
|
96 |
+
else:
|
97 |
+
self.kv_proj = nn.Identity()
|
98 |
+
|
99 |
+
self.attn = MultiheadAttention(embed_dim, num_heads)
|
100 |
+
self.ln_q = norm_layer(embed_dim)
|
101 |
+
self.ln_kv = norm_layer(embed_dim)
|
102 |
+
|
103 |
+
self.ln_post = norm_layer(embed_dim)
|
104 |
+
self.proj = nn.Parameter((embed_dim ** -0.5) * torch.randn(embed_dim, embed_dim))
|
105 |
+
|
106 |
+
self._set_2d_pos_cache(self.max_size)
|
107 |
+
self._adjust_pos_cache([32,32])
|
108 |
+
pos_embed = []
|
109 |
+
# for i in range(bs):
|
110 |
+
tgt_h, tgt_w = 32, 32
|
111 |
+
pos_embed.append(self.pos_embed[:tgt_h, :tgt_w, :].reshape((tgt_h * tgt_w, -1))) # patches * D
|
112 |
+
# key_padding_mask[:, patch_len:] = True
|
113 |
+
self.pos_embed = torch.nn.utils.rnn.pad_sequence(
|
114 |
+
pos_embed, batch_first=True, padding_value=0.0).permute(1, 0, 2) # BLD => L * B * D
|
115 |
+
|
116 |
+
def _set_2d_pos_cache(self, max_size, device='cpu'):
|
117 |
+
if is_deepspeed_zero3_enabled():
|
118 |
+
device='cuda'
|
119 |
+
pos_embed = torch.from_numpy(get_2d_sincos_pos_embed(self.embed_dim, max_size)).float().to(device)
|
120 |
+
self.register_buffer("pos_embed", pos_embed, persistent=False)
|
121 |
+
|
122 |
+
def _adjust_pos_cache(self, tgt_sizes, device='cpu'):
|
123 |
+
max_h = 32
|
124 |
+
max_w = 32
|
125 |
+
if max_h > self.max_size[0] or max_w > self.max_size[1]:
|
126 |
+
self.max_size = [max(max_h, self.max_size[0]), max(max_w, self.max_size[1])]
|
127 |
+
self._set_2d_pos_cache(self.max_size, device)
|
128 |
+
|
129 |
+
def _init_weights(self, m):
|
130 |
+
if isinstance(m, nn.Linear):
|
131 |
+
trunc_normal_(m.weight, std=.02)
|
132 |
+
if isinstance(m, nn.Linear) and m.bias is not None:
|
133 |
+
nn.init.constant_(m.bias, 0)
|
134 |
+
elif isinstance(m, nn.LayerNorm):
|
135 |
+
nn.init.constant_(m.bias, 0)
|
136 |
+
nn.init.constant_(m.weight, 1.0)
|
137 |
+
|
138 |
+
def forward(self, x, tgt_sizes=None):
|
139 |
+
dtype = x.dtype
|
140 |
+
|
141 |
+
|
142 |
+
x = self.kv_proj(x) # B * L * D
|
143 |
+
x = self.ln_kv(x).permute(1, 0, 2) # L * B * D
|
144 |
+
|
145 |
+
q = self.ln_q(self.query) # Q * D
|
146 |
+
|
147 |
+
out = self.attn(
|
148 |
+
q.unsqueeze(1), # Q * B * D
|
149 |
+
x + self.pos_embed.to(dtype), # L * B * D + L * B * D
|
150 |
+
x,
|
151 |
+
key_padding_mask=None)[0]
|
152 |
+
# out: Q * B * D
|
153 |
+
x = out.permute(1, 0, 2) # B * Q * D
|
154 |
+
|
155 |
+
x = self.ln_post(x)
|
156 |
+
x = x @ self.proj
|
157 |
+
return x
|
158 |
+
|
159 |
+
def _repeat(self, query, N: int):
|
160 |
+
return query.unsqueeze(1).repeat(1, N, 1)
|
161 |
+
|
162 |
+
|
163 |
+
class MultiheadAttention(nn.MultiheadAttention):
|
164 |
+
def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False,
|
165 |
+
add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None):
|
166 |
+
super().__init__(embed_dim, num_heads, dropout, bias, add_bias_kv, add_zero_attn, kdim, vdim, batch_first, device, dtype)
|
167 |
+
|
168 |
+
# rewrite the out_proj layer with nn.Linear
|
169 |
+
self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, device=device, dtype=dtype)
|
170 |
+
|
171 |
+
def forward(
|
172 |
+
self,
|
173 |
+
query: Tensor,
|
174 |
+
key: Tensor,
|
175 |
+
value: Tensor,
|
176 |
+
key_padding_mask: Optional[Tensor] = None,
|
177 |
+
need_weights: bool = True,
|
178 |
+
attn_mask: Optional[Tensor] = None,
|
179 |
+
average_attn_weights: bool = True,
|
180 |
+
is_causal : bool = False) -> Tuple[Tensor, Optional[Tensor]]:
|
181 |
+
why_not_fast_path = ''
|
182 |
+
if ((attn_mask is not None and torch.is_floating_point(attn_mask))
|
183 |
+
or (key_padding_mask is not None) and torch.is_floating_point(key_padding_mask)):
|
184 |
+
why_not_fast_path = "floating-point masks are not supported for fast path."
|
185 |
+
|
186 |
+
is_batched = query.dim() == 3
|
187 |
+
|
188 |
+
key_padding_mask = _canonical_mask(
|
189 |
+
mask=key_padding_mask,
|
190 |
+
mask_name="key_padding_mask",
|
191 |
+
other_type=F._none_or_dtype(attn_mask),
|
192 |
+
other_name="attn_mask",
|
193 |
+
target_type=query.dtype
|
194 |
+
)
|
195 |
+
|
196 |
+
attn_mask = _canonical_mask(
|
197 |
+
mask=attn_mask,
|
198 |
+
mask_name="attn_mask",
|
199 |
+
other_type=None,
|
200 |
+
other_name="",
|
201 |
+
target_type=query.dtype,
|
202 |
+
check_other=False,
|
203 |
+
)
|
204 |
+
|
205 |
+
|
206 |
+
if not is_batched:
|
207 |
+
why_not_fast_path = f"input not batched; expected query.dim() of 3 but got {query.dim()}"
|
208 |
+
elif query is not key or key is not value:
|
209 |
+
# When lifting this restriction, don't forget to either
|
210 |
+
# enforce that the dtypes all match or test cases where
|
211 |
+
# they don't!
|
212 |
+
why_not_fast_path = "non-self attention was used (query, key, and value are not the same Tensor)"
|
213 |
+
elif self.in_proj_bias is not None and query.dtype != self.in_proj_bias.dtype:
|
214 |
+
why_not_fast_path = f"dtypes of query ({query.dtype}) and self.in_proj_bias ({self.in_proj_bias.dtype}) don't match"
|
215 |
+
elif self.in_proj_weight is None:
|
216 |
+
why_not_fast_path = "in_proj_weight was None"
|
217 |
+
elif query.dtype != self.in_proj_weight.dtype:
|
218 |
+
# this case will fail anyway, but at least they'll get a useful error message.
|
219 |
+
why_not_fast_path = f"dtypes of query ({query.dtype}) and self.in_proj_weight ({self.in_proj_weight.dtype}) don't match"
|
220 |
+
elif self.training:
|
221 |
+
why_not_fast_path = "training is enabled"
|
222 |
+
elif (self.num_heads % 2) != 0:
|
223 |
+
why_not_fast_path = "self.num_heads is not even"
|
224 |
+
elif not self.batch_first:
|
225 |
+
why_not_fast_path = "batch_first was not True"
|
226 |
+
elif self.bias_k is not None:
|
227 |
+
why_not_fast_path = "self.bias_k was not None"
|
228 |
+
elif self.bias_v is not None:
|
229 |
+
why_not_fast_path = "self.bias_v was not None"
|
230 |
+
elif self.add_zero_attn:
|
231 |
+
why_not_fast_path = "add_zero_attn was enabled"
|
232 |
+
elif not self._qkv_same_embed_dim:
|
233 |
+
why_not_fast_path = "_qkv_same_embed_dim was not True"
|
234 |
+
elif query.is_nested and (key_padding_mask is not None or attn_mask is not None):
|
235 |
+
why_not_fast_path = "supplying both src_key_padding_mask and src_mask at the same time \
|
236 |
+
is not supported with NestedTensor input"
|
237 |
+
elif torch.is_autocast_enabled():
|
238 |
+
why_not_fast_path = "autocast is enabled"
|
239 |
+
|
240 |
+
if not why_not_fast_path:
|
241 |
+
tensor_args = (
|
242 |
+
query,
|
243 |
+
key,
|
244 |
+
value,
|
245 |
+
self.in_proj_weight,
|
246 |
+
self.in_proj_bias,
|
247 |
+
self.out_proj.weight,
|
248 |
+
self.out_proj.bias,
|
249 |
+
)
|
250 |
+
# We have to use list comprehensions below because TorchScript does not support
|
251 |
+
# generator expressions.
|
252 |
+
if torch.overrides.has_torch_function(tensor_args):
|
253 |
+
why_not_fast_path = "some Tensor argument has_torch_function"
|
254 |
+
elif _is_make_fx_tracing():
|
255 |
+
why_not_fast_path = "we are running make_fx tracing"
|
256 |
+
elif not all(_check_arg_device(x) for x in tensor_args):
|
257 |
+
why_not_fast_path = ("some Tensor argument's device is neither one of "
|
258 |
+
f"cpu, cuda or {torch.utils.backend_registration._privateuse1_backend_name}")
|
259 |
+
elif torch.is_grad_enabled() and any(_arg_requires_grad(x) for x in tensor_args):
|
260 |
+
why_not_fast_path = ("grad is enabled and at least one of query or the "
|
261 |
+
"input/output projection weights or biases requires_grad")
|
262 |
+
if not why_not_fast_path:
|
263 |
+
merged_mask, mask_type = self.merge_masks(attn_mask, key_padding_mask, query)
|
264 |
+
|
265 |
+
if self.in_proj_bias is not None and self.in_proj_weight is not None:
|
266 |
+
return torch._native_multi_head_attention(
|
267 |
+
query,
|
268 |
+
key,
|
269 |
+
value,
|
270 |
+
self.embed_dim,
|
271 |
+
self.num_heads,
|
272 |
+
self.in_proj_weight,
|
273 |
+
self.in_proj_bias,
|
274 |
+
self.out_proj.weight,
|
275 |
+
self.out_proj.bias,
|
276 |
+
merged_mask,
|
277 |
+
need_weights,
|
278 |
+
average_attn_weights,
|
279 |
+
mask_type)
|
280 |
+
|
281 |
+
any_nested = query.is_nested or key.is_nested or value.is_nested
|
282 |
+
assert not any_nested, ("MultiheadAttention does not support NestedTensor outside of its fast path. " +
|
283 |
+
f"The fast path was not hit because {why_not_fast_path}")
|
284 |
+
|
285 |
+
if self.batch_first and is_batched:
|
286 |
+
# make sure that the transpose op does not affect the "is" property
|
287 |
+
if key is value:
|
288 |
+
if query is key:
|
289 |
+
query = key = value = query.transpose(1, 0)
|
290 |
+
else:
|
291 |
+
query, key = (x.transpose(1, 0) for x in (query, key))
|
292 |
+
value = key
|
293 |
+
else:
|
294 |
+
query, key, value = (x.transpose(1, 0) for x in (query, key, value))
|
295 |
+
|
296 |
+
if not self._qkv_same_embed_dim:
|
297 |
+
attn_output, attn_output_weights = self.multi_head_attention_forward(
|
298 |
+
query, key, value, self.embed_dim, self.num_heads,
|
299 |
+
self.in_proj_weight, self.in_proj_bias,
|
300 |
+
self.bias_k, self.bias_v, self.add_zero_attn,
|
301 |
+
self.dropout, self.out_proj.weight, self.out_proj.bias,
|
302 |
+
training=self.training,
|
303 |
+
key_padding_mask=key_padding_mask, need_weights=need_weights,
|
304 |
+
attn_mask=attn_mask,
|
305 |
+
use_separate_proj_weight=True,
|
306 |
+
q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
|
307 |
+
v_proj_weight=self.v_proj_weight,
|
308 |
+
average_attn_weights=average_attn_weights,
|
309 |
+
is_causal=is_causal)
|
310 |
+
else:
|
311 |
+
attn_output, attn_output_weights = self.multi_head_attention_forward(
|
312 |
+
query, key, value, self.embed_dim, self.num_heads,
|
313 |
+
self.in_proj_weight, self.in_proj_bias,
|
314 |
+
self.bias_k, self.bias_v, self.add_zero_attn,
|
315 |
+
self.dropout, self.out_proj.weight, self.out_proj.bias,
|
316 |
+
training=self.training,
|
317 |
+
key_padding_mask=key_padding_mask,
|
318 |
+
need_weights=need_weights,
|
319 |
+
attn_mask=attn_mask,
|
320 |
+
average_attn_weights=average_attn_weights,
|
321 |
+
is_causal=is_causal)
|
322 |
+
if self.batch_first and is_batched:
|
323 |
+
return attn_output.transpose(1, 0), attn_output_weights
|
324 |
+
else:
|
325 |
+
return attn_output, attn_output_weights
|
326 |
+
|
327 |
+
def multi_head_attention_forward(
|
328 |
+
self,
|
329 |
+
query: Tensor,
|
330 |
+
key: Tensor,
|
331 |
+
value: Tensor,
|
332 |
+
embed_dim_to_check: int,
|
333 |
+
num_heads: int,
|
334 |
+
in_proj_weight: Optional[Tensor],
|
335 |
+
in_proj_bias: Optional[Tensor],
|
336 |
+
bias_k: Optional[Tensor],
|
337 |
+
bias_v: Optional[Tensor],
|
338 |
+
add_zero_attn: bool,
|
339 |
+
dropout_p: float,
|
340 |
+
out_proj_weight: Tensor,
|
341 |
+
out_proj_bias: Optional[Tensor],
|
342 |
+
training: bool = True,
|
343 |
+
key_padding_mask: Optional[Tensor] = None,
|
344 |
+
need_weights: bool = True,
|
345 |
+
attn_mask: Optional[Tensor] = None,
|
346 |
+
use_separate_proj_weight: bool = False,
|
347 |
+
q_proj_weight: Optional[Tensor] = None,
|
348 |
+
k_proj_weight: Optional[Tensor] = None,
|
349 |
+
v_proj_weight: Optional[Tensor] = None,
|
350 |
+
static_k: Optional[Tensor] = None,
|
351 |
+
static_v: Optional[Tensor] = None,
|
352 |
+
average_attn_weights: bool = True,
|
353 |
+
is_causal: bool = False,
|
354 |
+
) -> Tuple[Tensor, Optional[Tensor]]:
|
355 |
+
tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)
|
356 |
+
|
357 |
+
is_batched = _mha_shape_check(query, key, value, key_padding_mask, attn_mask, num_heads)
|
358 |
+
|
359 |
+
# For unbatched input, we unsqueeze at the expected batch-dim to pretend that the input
|
360 |
+
# is batched, run the computation and before returning squeeze the
|
361 |
+
# batch dimension so that the output doesn't carry this temporary batch dimension.
|
362 |
+
if not is_batched:
|
363 |
+
# unsqueeze if the input is unbatched
|
364 |
+
query = query.unsqueeze(1)
|
365 |
+
key = key.unsqueeze(1)
|
366 |
+
value = value.unsqueeze(1)
|
367 |
+
if key_padding_mask is not None:
|
368 |
+
key_padding_mask = key_padding_mask.unsqueeze(0)
|
369 |
+
|
370 |
+
# set up shape vars
|
371 |
+
tgt_len, bsz, embed_dim = query.shape
|
372 |
+
src_len, _, _ = key.shape
|
373 |
+
|
374 |
+
key_padding_mask = _canonical_mask(
|
375 |
+
mask=key_padding_mask,
|
376 |
+
mask_name="key_padding_mask",
|
377 |
+
other_type=_none_or_dtype(attn_mask),
|
378 |
+
other_name="attn_mask",
|
379 |
+
target_type=query.dtype
|
380 |
+
)
|
381 |
+
|
382 |
+
if is_causal and attn_mask is None:
|
383 |
+
raise RuntimeError(
|
384 |
+
"Need attn_mask if specifying the is_causal hint. "
|
385 |
+
"You may use the Transformer module method "
|
386 |
+
"`generate_square_subsequent_mask` to create this mask."
|
387 |
+
)
|
388 |
+
|
389 |
+
if is_causal and key_padding_mask is None and not need_weights:
|
390 |
+
# when we have a kpm or need weights, we need attn_mask
|
391 |
+
# Otherwise, we use the is_causal hint go as is_causal
|
392 |
+
# indicator to SDPA.
|
393 |
+
attn_mask = None
|
394 |
+
else:
|
395 |
+
attn_mask = _canonical_mask(
|
396 |
+
mask=attn_mask,
|
397 |
+
mask_name="attn_mask",
|
398 |
+
other_type=None,
|
399 |
+
other_name="",
|
400 |
+
target_type=query.dtype,
|
401 |
+
check_other=False,
|
402 |
+
)
|
403 |
+
|
404 |
+
if key_padding_mask is not None:
|
405 |
+
# We have the attn_mask, and use that to merge kpm into it.
|
406 |
+
# Turn off use of is_causal hint, as the merged mask is no
|
407 |
+
# longer causal.
|
408 |
+
is_causal = False
|
409 |
+
|
410 |
+
assert embed_dim == embed_dim_to_check, \
|
411 |
+
f"was expecting embedding dimension of {embed_dim_to_check}, but got {embed_dim}"
|
412 |
+
if isinstance(embed_dim, torch.Tensor):
|
413 |
+
# embed_dim can be a tensor when JIT tracing
|
414 |
+
head_dim = embed_dim.div(num_heads, rounding_mode='trunc')
|
415 |
+
else:
|
416 |
+
head_dim = embed_dim // num_heads
|
417 |
+
assert head_dim * num_heads == embed_dim, f"embed_dim {embed_dim} not divisible by num_heads {num_heads}"
|
418 |
+
if use_separate_proj_weight:
|
419 |
+
# allow MHA to have different embedding dimensions when separate projection weights are used
|
420 |
+
assert key.shape[:2] == value.shape[:2], \
|
421 |
+
f"key's sequence and batch dims {key.shape[:2]} do not match value's {value.shape[:2]}"
|
422 |
+
else:
|
423 |
+
assert key.shape == value.shape, f"key shape {key.shape} does not match value shape {value.shape}"
|
424 |
+
|
425 |
+
#
|
426 |
+
# compute in-projection
|
427 |
+
#
|
428 |
+
if not use_separate_proj_weight:
|
429 |
+
assert in_proj_weight is not None, "use_separate_proj_weight is False but in_proj_weight is None"
|
430 |
+
q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias)
|
431 |
+
else:
|
432 |
+
assert q_proj_weight is not None, "use_separate_proj_weight is True but q_proj_weight is None"
|
433 |
+
assert k_proj_weight is not None, "use_separate_proj_weight is True but k_proj_weight is None"
|
434 |
+
assert v_proj_weight is not None, "use_separate_proj_weight is True but v_proj_weight is None"
|
435 |
+
if in_proj_bias is None:
|
436 |
+
b_q = b_k = b_v = None
|
437 |
+
else:
|
438 |
+
b_q, b_k, b_v = in_proj_bias.chunk(3)
|
439 |
+
q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v)
|
440 |
+
|
441 |
+
# prep attention mask
|
442 |
+
|
443 |
+
if attn_mask is not None:
|
444 |
+
# ensure attn_mask's dim is 3
|
445 |
+
if attn_mask.dim() == 2:
|
446 |
+
correct_2d_size = (tgt_len, src_len)
|
447 |
+
if attn_mask.shape != correct_2d_size:
|
448 |
+
raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
|
449 |
+
attn_mask = attn_mask.unsqueeze(0)
|
450 |
+
elif attn_mask.dim() == 3:
|
451 |
+
correct_3d_size = (bsz * num_heads, tgt_len, src_len)
|
452 |
+
if attn_mask.shape != correct_3d_size:
|
453 |
+
raise RuntimeError(f"The shape of the 3D attn_mask is {attn_mask.shape}, but should be {correct_3d_size}.")
|
454 |
+
else:
|
455 |
+
raise RuntimeError(f"attn_mask's dimension {attn_mask.dim()} is not supported")
|
456 |
+
|
457 |
+
# add bias along batch dimension (currently second)
|
458 |
+
if bias_k is not None and bias_v is not None:
|
459 |
+
assert static_k is None, "bias cannot be added to static key."
|
460 |
+
assert static_v is None, "bias cannot be added to static value."
|
461 |
+
k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
|
462 |
+
v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
|
463 |
+
if attn_mask is not None:
|
464 |
+
attn_mask = pad(attn_mask, (0, 1))
|
465 |
+
if key_padding_mask is not None:
|
466 |
+
key_padding_mask = pad(key_padding_mask, (0, 1))
|
467 |
+
else:
|
468 |
+
assert bias_k is None
|
469 |
+
assert bias_v is None
|
470 |
+
|
471 |
+
#
|
472 |
+
# reshape q, k, v for multihead attention and make em batch first
|
473 |
+
#
|
474 |
+
q = q.view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
|
475 |
+
if static_k is None:
|
476 |
+
k = k.view(k.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
|
477 |
+
else:
|
478 |
+
# TODO finish disentangling control flow so we don't do in-projections when statics are passed
|
479 |
+
assert static_k.size(0) == bsz * num_heads, \
|
480 |
+
f"expecting static_k.size(0) of {bsz * num_heads}, but got {static_k.size(0)}"
|
481 |
+
assert static_k.size(2) == head_dim, \
|
482 |
+
f"expecting static_k.size(2) of {head_dim}, but got {static_k.size(2)}"
|
483 |
+
k = static_k
|
484 |
+
if static_v is None:
|
485 |
+
v = v.view(v.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
|
486 |
+
else:
|
487 |
+
# TODO finish disentangling control flow so we don't do in-projections when statics are passed
|
488 |
+
assert static_v.size(0) == bsz * num_heads, \
|
489 |
+
f"expecting static_v.size(0) of {bsz * num_heads}, but got {static_v.size(0)}"
|
490 |
+
assert static_v.size(2) == head_dim, \
|
491 |
+
f"expecting static_v.size(2) of {head_dim}, but got {static_v.size(2)}"
|
492 |
+
v = static_v
|
493 |
+
|
494 |
+
# add zero attention along batch dimension (now first)
|
495 |
+
if add_zero_attn:
|
496 |
+
zero_attn_shape = (bsz * num_heads, 1, head_dim)
|
497 |
+
k = torch.cat([k, torch.zeros(zero_attn_shape, dtype=k.dtype, device=k.device)], dim=1)
|
498 |
+
v = torch.cat([v, torch.zeros(zero_attn_shape, dtype=v.dtype, device=v.device)], dim=1)
|
499 |
+
if attn_mask is not None:
|
500 |
+
attn_mask = pad(attn_mask, (0, 1))
|
501 |
+
if key_padding_mask is not None:
|
502 |
+
key_padding_mask = pad(key_padding_mask, (0, 1))
|
503 |
+
|
504 |
+
# update source sequence length after adjustments
|
505 |
+
src_len = k.size(1)
|
506 |
+
|
507 |
+
# merge key padding and attention masks
|
508 |
+
if key_padding_mask is not None:
|
509 |
+
assert key_padding_mask.shape == (bsz, src_len), \
|
510 |
+
f"expecting key_padding_mask shape of {(bsz, src_len)}, but got {key_padding_mask.shape}"
|
511 |
+
key_padding_mask = key_padding_mask.view(bsz, 1, 1, src_len). \
|
512 |
+
expand(-1, num_heads, -1, -1).reshape(bsz * num_heads, 1, src_len)
|
513 |
+
if attn_mask is None:
|
514 |
+
attn_mask = key_padding_mask
|
515 |
+
else:
|
516 |
+
attn_mask = attn_mask + key_padding_mask
|
517 |
+
|
518 |
+
# adjust dropout probability
|
519 |
+
if not training:
|
520 |
+
dropout_p = 0.0
|
521 |
+
|
522 |
+
#
|
523 |
+
# (deep breath) calculate attention and out projection
|
524 |
+
#
|
525 |
+
|
526 |
+
if need_weights:
|
527 |
+
B, Nt, E = 28, 64, 128
|
528 |
+
q_scaled = q / math.sqrt(E)
|
529 |
+
|
530 |
+
assert not (is_causal and attn_mask is None), "FIXME: is_causal not implemented for need_weights"
|
531 |
+
|
532 |
+
if attn_mask is not None:
|
533 |
+
attn_output_weights = torch.baddbmm(attn_mask, q_scaled, k.transpose(-2, -1))
|
534 |
+
else:
|
535 |
+
attn_output_weights = torch.bmm(q_scaled, k.transpose(-2, -1))
|
536 |
+
attn_output_weights = softmax(attn_output_weights, dim=-1)
|
537 |
+
if dropout_p > 0.0:
|
538 |
+
attn_output_weights = dropout(attn_output_weights, p=dropout_p)
|
539 |
+
|
540 |
+
attn_output = torch.bmm(attn_output_weights, v)
|
541 |
+
|
542 |
+
attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len * bsz, embed_dim)
|
543 |
+
attn_output = self.out_proj(attn_output)
|
544 |
+
attn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))
|
545 |
+
|
546 |
+
# optionally average attention weights over heads
|
547 |
+
attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
|
548 |
+
if average_attn_weights:
|
549 |
+
attn_output_weights = attn_output_weights.mean(dim=1)
|
550 |
+
|
551 |
+
if not is_batched:
|
552 |
+
# squeeze the output if input was unbatched
|
553 |
+
attn_output = attn_output.squeeze(1)
|
554 |
+
attn_output_weights = attn_output_weights.squeeze(0)
|
555 |
+
return attn_output, attn_output_weights
|
556 |
+
else:
|
557 |
+
# attn_mask can be either (L,S) or (N*num_heads, L, S)
|
558 |
+
# if attn_mask's shape is (1, L, S) we need to unsqueeze to (1, 1, L, S)
|
559 |
+
# in order to match the input for SDPA of (N, num_heads, L, S)
|
560 |
+
if attn_mask is not None:
|
561 |
+
if attn_mask.size(0) == 1 and attn_mask.dim() == 3:
|
562 |
+
attn_mask = attn_mask.unsqueeze(0)
|
563 |
+
else:
|
564 |
+
attn_mask = attn_mask.view(bsz, num_heads, -1, src_len)
|
565 |
+
|
566 |
+
q = q.view(bsz, num_heads, tgt_len, head_dim)
|
567 |
+
k = k.view(bsz, num_heads, src_len, head_dim)
|
568 |
+
v = v.view(bsz, num_heads, src_len, head_dim)
|
569 |
+
|
570 |
+
attn_output = F.scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
|
571 |
+
attn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)
|
572 |
+
|
573 |
+
attn_output = self.out_proj(attn_output)
|
574 |
+
attn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))
|
575 |
+
if not is_batched:
|
576 |
+
# squeeze the output if input was unbatched
|
577 |
+
attn_output = attn_output.squeeze(1)
|
578 |
+
return attn_output, None
|
579 |
+
|
580 |
+
|
581 |
+
def _mha_shape_check(query: Tensor, key: Tensor, value: Tensor,
|
582 |
+
key_padding_mask: Optional[Tensor], attn_mask: Optional[Tensor], num_heads: int):
|
583 |
+
# Verifies the expected shape for `query, `key`, `value`, `key_padding_mask` and `attn_mask`
|
584 |
+
# and returns if the input is batched or not.
|
585 |
+
# Raises an error if `query` is not 2-D (unbatched) or 3-D (batched) tensor.
|
586 |
+
|
587 |
+
# Shape check.
|
588 |
+
if query.dim() == 3:
|
589 |
+
# Batched Inputs
|
590 |
+
is_batched = True
|
591 |
+
assert key.dim() == 3 and value.dim() == 3, \
|
592 |
+
("For batched (3-D) `query`, expected `key` and `value` to be 3-D"
|
593 |
+
f" but found {key.dim()}-D and {value.dim()}-D tensors respectively")
|
594 |
+
if key_padding_mask is not None:
|
595 |
+
assert key_padding_mask.dim() == 2, \
|
596 |
+
("For batched (3-D) `query`, expected `key_padding_mask` to be `None` or 2-D"
|
597 |
+
f" but found {key_padding_mask.dim()}-D tensor instead")
|
598 |
+
if attn_mask is not None:
|
599 |
+
assert attn_mask.dim() in (2, 3), \
|
600 |
+
("For batched (3-D) `query`, expected `attn_mask` to be `None`, 2-D or 3-D"
|
601 |
+
f" but found {attn_mask.dim()}-D tensor instead")
|
602 |
+
elif query.dim() == 2:
|
603 |
+
# Unbatched Inputs
|
604 |
+
is_batched = False
|
605 |
+
assert key.dim() == 2 and value.dim() == 2, \
|
606 |
+
("For unbatched (2-D) `query`, expected `key` and `value` to be 2-D"
|
607 |
+
f" but found {key.dim()}-D and {value.dim()}-D tensors respectively")
|
608 |
+
|
609 |
+
if key_padding_mask is not None:
|
610 |
+
assert key_padding_mask.dim() == 1, \
|
611 |
+
("For unbatched (2-D) `query`, expected `key_padding_mask` to be `None` or 1-D"
|
612 |
+
f" but found {key_padding_mask.dim()}-D tensor instead")
|
613 |
+
|
614 |
+
if attn_mask is not None:
|
615 |
+
assert attn_mask.dim() in (2, 3), \
|
616 |
+
("For unbatched (2-D) `query`, expected `attn_mask` to be `None`, 2-D or 3-D"
|
617 |
+
f" but found {attn_mask.dim()}-D tensor instead")
|
618 |
+
if attn_mask.dim() == 3:
|
619 |
+
expected_shape = (num_heads, query.shape[0], key.shape[0])
|
620 |
+
assert attn_mask.shape == expected_shape, \
|
621 |
+
(f"Expected `attn_mask` shape to be {expected_shape} but got {attn_mask.shape}")
|
622 |
+
else:
|
623 |
+
raise AssertionError(
|
624 |
+
f"query should be unbatched 2D or batched 3D tensor but received {query.dim()}-D query tensor")
|
625 |
+
|
626 |
+
return is_batched
|
627 |
+
|
628 |
+
|
629 |
+
def _canonical_mask(
|
630 |
+
mask: Optional[Tensor],
|
631 |
+
mask_name: str,
|
632 |
+
other_type: Optional[DType],
|
633 |
+
other_name: str,
|
634 |
+
target_type: DType,
|
635 |
+
check_other: bool = True,
|
636 |
+
) -> Optional[Tensor]:
|
637 |
+
|
638 |
+
if mask is not None:
|
639 |
+
_mask_dtype = mask.dtype
|
640 |
+
_mask_is_float = torch.is_floating_point(mask)
|
641 |
+
if _mask_dtype != torch.bool and not _mask_is_float:
|
642 |
+
raise AssertionError(
|
643 |
+
f"only bool and floating types of {mask_name} are supported")
|
644 |
+
if check_other and other_type is not None:
|
645 |
+
if _mask_dtype != other_type:
|
646 |
+
warnings.warn(
|
647 |
+
f"Support for mismatched {mask_name} and {other_name} "
|
648 |
+
"is deprecated. Use same type for both instead."
|
649 |
+
)
|
650 |
+
if not _mask_is_float:
|
651 |
+
mask = (
|
652 |
+
torch.zeros_like(mask, dtype=target_type)
|
653 |
+
.masked_fill_(mask, float("-inf"))
|
654 |
+
)
|
655 |
+
return mask
|
656 |
+
|
657 |
+
|
658 |
+
def _none_or_dtype(input: Optional[Tensor]) -> Optional[DType]:
|
659 |
+
if input is None:
|
660 |
+
return None
|
661 |
+
elif isinstance(input, torch.Tensor):
|
662 |
+
return input.dtype
|
663 |
+
raise RuntimeError("input to _none_or_dtype() must be None or torch.Tensor")
|
664 |
+
|
665 |
+
def _in_projection_packed(
|
666 |
+
q: Tensor,
|
667 |
+
k: Tensor,
|
668 |
+
v: Tensor,
|
669 |
+
w: Tensor,
|
670 |
+
b: Optional[Tensor] = None,
|
671 |
+
) -> List[Tensor]:
|
672 |
+
r"""
|
673 |
+
Performs the in-projection step of the attention operation, using packed weights.
|
674 |
+
Output is a triple containing projection tensors for query, key and value.
|
675 |
+
Args:
|
676 |
+
q, k, v: query, key and value tensors to be projected. For self-attention,
|
677 |
+
these are typically the same tensor; for encoder-decoder attention,
|
678 |
+
k and v are typically the same tensor. (We take advantage of these
|
679 |
+
identities for performance if they are present.) Regardless, q, k and v
|
680 |
+
must share a common embedding dimension; otherwise their shapes may vary.
|
681 |
+
w: projection weights for q, k and v, packed into a single tensor. Weights
|
682 |
+
are packed along dimension 0, in q, k, v order.
|
683 |
+
b: optional projection biases for q, k and v, packed into a single tensor
|
684 |
+
in q, k, v order.
|
685 |
+
Shape:
|
686 |
+
Inputs:
|
687 |
+
- q: :math:`(..., E)` where E is the embedding dimension
|
688 |
+
- k: :math:`(..., E)` where E is the embedding dimension
|
689 |
+
- v: :math:`(..., E)` where E is the embedding dimension
|
690 |
+
- w: :math:`(E * 3, E)` where E is the embedding dimension
|
691 |
+
- b: :math:`E * 3` where E is the embedding dimension
|
692 |
+
Output:
|
693 |
+
- in output list :math:`[q', k', v']`, each output tensor will have the
|
694 |
+
same shape as the corresponding input tensor.
|
695 |
+
"""
|
696 |
+
E = q.size(-1)
|
697 |
+
if k is v:
|
698 |
+
if q is k:
|
699 |
+
# self-attention
|
700 |
+
proj = linear(q, w, b)
|
701 |
+
# reshape to 3, E and not E, 3 is deliberate for better memory coalescing and keeping same order as chunk()
|
702 |
+
proj = proj.unflatten(-1, (3, E)).unsqueeze(0).transpose(0, -2).squeeze(-2).contiguous()
|
703 |
+
return proj[0], proj[1], proj[2]
|
704 |
+
else:
|
705 |
+
# encoder-decoder attention
|
706 |
+
w_q, w_kv = w.split([E, E * 2])
|
707 |
+
if b is None:
|
708 |
+
b_q = b_kv = None
|
709 |
+
else:
|
710 |
+
b_q, b_kv = b.split([E, E * 2])
|
711 |
+
q_proj = linear(q, w_q, b_q)
|
712 |
+
kv_proj = linear(k, w_kv, b_kv)
|
713 |
+
# reshape to 2, E and not E, 2 is deliberate for better memory coalescing and keeping same order as chunk()
|
714 |
+
kv_proj = kv_proj.unflatten(-1, (2, E)).unsqueeze(0).transpose(0, -2).squeeze(-2).contiguous()
|
715 |
+
return (q_proj, kv_proj[0], kv_proj[1])
|
716 |
+
else:
|
717 |
+
w_q, w_k, w_v = w.chunk(3)
|
718 |
+
if b is None:
|
719 |
+
b_q = b_k = b_v = None
|
720 |
+
else:
|
721 |
+
b_q, b_k, b_v = b.chunk(3)
|
722 |
+
return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v)
|
723 |
+
|
724 |
+
|
725 |
+
def _in_projection(
|
726 |
+
q: Tensor,
|
727 |
+
k: Tensor,
|
728 |
+
v: Tensor,
|
729 |
+
w_q: Tensor,
|
730 |
+
w_k: Tensor,
|
731 |
+
w_v: Tensor,
|
732 |
+
b_q: Optional[Tensor] = None,
|
733 |
+
b_k: Optional[Tensor] = None,
|
734 |
+
b_v: Optional[Tensor] = None,
|
735 |
+
) -> Tuple[Tensor, Tensor, Tensor]:
|
736 |
+
r"""
|
737 |
+
Performs the in-projection step of the attention operation. This is simply
|
738 |
+
a triple of linear projections, with shape constraints on the weights which
|
739 |
+
ensure embedding dimension uniformity in the projected outputs.
|
740 |
+
Output is a triple containing projection tensors for query, key and value.
|
741 |
+
Args:
|
742 |
+
q, k, v: query, key and value tensors to be projected.
|
743 |
+
w_q, w_k, w_v: weights for q, k and v, respectively.
|
744 |
+
b_q, b_k, b_v: optional biases for q, k and v, respectively.
|
745 |
+
Shape:
|
746 |
+
Inputs:
|
747 |
+
- q: :math:`(Qdims..., Eq)` where Eq is the query embedding dimension and Qdims are any
|
748 |
+
number of leading dimensions.
|
749 |
+
- k: :math:`(Kdims..., Ek)` where Ek is the key embedding dimension and Kdims are any
|
750 |
+
number of leading dimensions.
|
751 |
+
- v: :math:`(Vdims..., Ev)` where Ev is the value embedding dimension and Vdims are any
|
752 |
+
number of leading dimensions.
|
753 |
+
- w_q: :math:`(Eq, Eq)`
|
754 |
+
- w_k: :math:`(Eq, Ek)`
|
755 |
+
- w_v: :math:`(Eq, Ev)`
|
756 |
+
- b_q: :math:`(Eq)`
|
757 |
+
- b_k: :math:`(Eq)`
|
758 |
+
- b_v: :math:`(Eq)`
|
759 |
+
Output: in output triple :math:`(q', k', v')`,
|
760 |
+
- q': :math:`[Qdims..., Eq]`
|
761 |
+
- k': :math:`[Kdims..., Eq]`
|
762 |
+
- v': :math:`[Vdims..., Eq]`
|
763 |
+
"""
|
764 |
+
Eq, Ek, Ev = q.size(-1), k.size(-1), v.size(-1)
|
765 |
+
assert w_q.shape == (Eq, Eq), f"expecting query weights shape of {(Eq, Eq)}, but got {w_q.shape}"
|
766 |
+
assert w_k.shape == (Eq, Ek), f"expecting key weights shape of {(Eq, Ek)}, but got {w_k.shape}"
|
767 |
+
assert w_v.shape == (Eq, Ev), f"expecting value weights shape of {(Eq, Ev)}, but got {w_v.shape}"
|
768 |
+
assert b_q is None or b_q.shape == (Eq,), f"expecting query bias shape of {(Eq,)}, but got {b_q.shape}"
|
769 |
+
assert b_k is None or b_k.shape == (Eq,), f"expecting key bias shape of {(Eq,)}, but got {b_k.shape}"
|
770 |
+
assert b_v is None or b_v.shape == (Eq,), f"expecting value bias shape of {(Eq,)}, but got {b_v.shape}"
|
771 |
+
return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v)
|
qwen.rkllm
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34b91108056dc595a3eb4c9f340217160974adf35d4399ac5187eae6f22bb6a0
size 8681282052
rename_tensors.py
ADDED
@@ -0,0 +1,46 @@
import json
import os
import shutil
import mmap
import re

def rename_tensors():
    # Read the safetensors index file
    with open('model.safetensors.index.json', 'r') as f:
        data = json.load(f)

    # Collect all unique safetensors file names
    safetensor_files = set(data['weight_map'].values())

    # Copy and rename the safetensors files
    for file in safetensor_files:
        new_file = file.replace('model-', 'model-renamed-')
        shutil.copy(file, new_file)

        # Rewrite the tensor names inside the first 1 MB of the new file (the safetensors header)
        with open(new_file, 'r+b') as f:
            mm = mmap.mmap(f.fileno(), 1024*1024)  # map the first 1 MB
            content = mm.read()
            # Replace using byte strings; the replacement is padded with spaces to the same
            # byte length so the header size recorded in the file stays valid
            content = content.replace(b'"llm.', b'    "')
            mm.seek(0)
            mm.write(content)
            mm.close()

    # Update the JSON index data
    new_weight_map = {}
    for key, value in data['weight_map'].items():
        new_key = re.sub(r'^llm\.', '', key)
        new_value = value.replace('model-', 'model-renamed-')
        new_weight_map[new_key] = new_value

    data['weight_map'] = new_weight_map

    # Write the new JSON index file
    with open('model-renamed.safetensors.index.json', 'w') as f:
        json.dump(data, f, indent=2)

    print("处理完成。新的JSON文件已生成:model-renamed.safetensors.index.json")

if __name__ == "__main__":
    rename_tensors()
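
The renaming above strips the `llm.` prefix from the language-model tensor names, presumably so the LLM weights appear under the standard Qwen2 names that rkllm-toolkit expects. A minimal sketch of the key mapping (the layer name below is illustrative, not taken from the real index file):

```python
import re

# Hypothetical key as it appears in MiniCPM-V-2.6's model.safetensors.index.json
key = "llm.model.layers.0.self_attn.q_proj.weight"
new_key = re.sub(r'^llm\.', '', key)  # same regex as rename_tensors.py
print(new_key)  # -> "model.layers.0.self_attn.q_proj.weight"
```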
rkllm-convert.py
ADDED
@@ -0,0 +1,23 @@
from rkllm.api import RKLLM

modelpath = '.'
llm = RKLLM()

ret = llm.load_huggingface(model=modelpath, model_lora=None, device='cpu')
if ret != 0:
    print('Load model failed!')
    exit(ret)

qparams = None
ret = llm.build(do_quantization=True, optimization_level=1, quantized_dtype='w8a8_g128',
                quantized_algorithm='normal', target_platform='rk3588', num_npu_core=3, extra_qparams=qparams)

if ret != 0:
    print('Build model failed!')
    exit(ret)

# Export rkllm model
ret = llm.export_rkllm("./qwen.rkllm")
if ret != 0:
    print('Export model failed!')
    exit(ret)
rkllm_binding.py
ADDED
@@ -0,0 +1,227 @@
import ctypes
import numpy as np
from enum import IntEnum
from typing import Callable, Any

# Load the shared library
_lib = ctypes.CDLL("librkllmrt.so")  # Adjust the library name if necessary

# Define enums
class LLMCallState(IntEnum):
    RKLLM_RUN_NORMAL = 0
    RKLLM_RUN_WAITING = 1
    RKLLM_RUN_FINISH = 2
    RKLLM_RUN_ERROR = 3
    RKLLM_RUN_GET_LAST_HIDDEN_LAYER = 4

class RKLLMInputType(IntEnum):
    RKLLM_INPUT_PROMPT = 0
    RKLLM_INPUT_TOKEN = 1
    RKLLM_INPUT_EMBED = 2
    RKLLM_INPUT_MULTIMODAL = 3

class RKLLMInferMode(IntEnum):
    RKLLM_INFER_GENERATE = 0
    RKLLM_INFER_GET_LAST_HIDDEN_LAYER = 1

# Define structures
class RKLLMExtendParam(ctypes.Structure):
    _fields_ = [
        ("base_domain_id", ctypes.c_int32),
        ("reserved", ctypes.c_uint8 * 112)
    ]

class RKLLMParam(ctypes.Structure):
    _fields_ = [
        ("model_path", ctypes.c_char_p),
        ("max_context_len", ctypes.c_int32),
        ("max_new_tokens", ctypes.c_int32),
        ("top_k", ctypes.c_int32),
        ("top_p", ctypes.c_float),
        ("temperature", ctypes.c_float),
        ("repeat_penalty", ctypes.c_float),
        ("frequency_penalty", ctypes.c_float),
        ("presence_penalty", ctypes.c_float),
        ("mirostat", ctypes.c_int32),
        ("mirostat_tau", ctypes.c_float),
        ("mirostat_eta", ctypes.c_float),
        ("skip_special_token", ctypes.c_bool),
        ("is_async", ctypes.c_bool),
        ("img_start", ctypes.c_char_p),
        ("img_end", ctypes.c_char_p),
        ("img_content", ctypes.c_char_p),
        ("extend_param", RKLLMExtendParam)
    ]

class RKLLMLoraAdapter(ctypes.Structure):
    _fields_ = [
        ("lora_adapter_path", ctypes.c_char_p),
        ("lora_adapter_name", ctypes.c_char_p),
        ("scale", ctypes.c_float)
    ]

class RKLLMEmbedInput(ctypes.Structure):
    _fields_ = [
        ("embed", ctypes.POINTER(ctypes.c_float)),
        ("n_tokens", ctypes.c_size_t)
    ]

class RKLLMTokenInput(ctypes.Structure):
    _fields_ = [
        ("input_ids", ctypes.POINTER(ctypes.c_int32)),
        ("n_tokens", ctypes.c_size_t)
    ]

class RKLLMMultiModelInput(ctypes.Structure):
    _fields_ = [
        ("prompt", ctypes.c_char_p),
        ("image_embed", ctypes.POINTER(ctypes.c_float)),
        ("n_image_tokens", ctypes.c_size_t)
    ]

class RKLLMInput(ctypes.Structure):
    class _InputUnion(ctypes.Union):
        _fields_ = [
            ("prompt_input", ctypes.c_char_p),
            ("embed_input", RKLLMEmbedInput),
            ("token_input", RKLLMTokenInput),
            ("multimodal_input", RKLLMMultiModelInput)
        ]
    _fields_ = [
        ("input_type", ctypes.c_int),
        ("_input", _InputUnion)
    ]

class RKLLMLoraParam(ctypes.Structure):
    _fields_ = [
        ("lora_adapter_name", ctypes.c_char_p)
    ]

class RKLLMPromptCacheParam(ctypes.Structure):
    _fields_ = [
        ("save_prompt_cache", ctypes.c_int),
        ("prompt_cache_path", ctypes.c_char_p)
    ]

class RKLLMInferParam(ctypes.Structure):
    _fields_ = [
        ("mode", ctypes.c_int),
        ("lora_params", ctypes.POINTER(RKLLMLoraParam)),
        ("prompt_cache_params", ctypes.POINTER(RKLLMPromptCacheParam))
    ]

class RKLLMResultLastHiddenLayer(ctypes.Structure):
    _fields_ = [
        ("hidden_states", ctypes.POINTER(ctypes.c_float)),
        ("embd_size", ctypes.c_int),
        ("num_tokens", ctypes.c_int)
    ]

class RKLLMResult(ctypes.Structure):
    _fields_ = [
        ("text", ctypes.c_char_p),
        ("token_id", ctypes.c_int32),
        ("last_hidden_layer", RKLLMResultLastHiddenLayer)
    ]

# Define callback type
LLMResultCallback = ctypes.CFUNCTYPE(None, ctypes.POINTER(RKLLMResult), ctypes.c_void_p, ctypes.c_int)

# Define function prototypes
_lib.rkllm_createDefaultParam.restype = RKLLMParam
_lib.rkllm_init.argtypes = [ctypes.POINTER(ctypes.c_void_p), ctypes.POINTER(RKLLMParam), LLMResultCallback]
_lib.rkllm_init.restype = ctypes.c_int
_lib.rkllm_load_lora.argtypes = [ctypes.c_void_p, ctypes.POINTER(RKLLMLoraAdapter)]
_lib.rkllm_load_lora.restype = ctypes.c_int
_lib.rkllm_load_prompt_cache.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
_lib.rkllm_load_prompt_cache.restype = ctypes.c_int
_lib.rkllm_release_prompt_cache.argtypes = [ctypes.c_void_p]
_lib.rkllm_release_prompt_cache.restype = ctypes.c_int
_lib.rkllm_destroy.argtypes = [ctypes.c_void_p]
_lib.rkllm_destroy.restype = ctypes.c_int
_lib.rkllm_run.argtypes = [ctypes.c_void_p, ctypes.POINTER(RKLLMInput), ctypes.POINTER(RKLLMInferParam), ctypes.c_void_p]
_lib.rkllm_run.restype = ctypes.c_int
_lib.rkllm_run_async.argtypes = [ctypes.c_void_p, ctypes.POINTER(RKLLMInput), ctypes.POINTER(RKLLMInferParam), ctypes.c_void_p]
_lib.rkllm_run_async.restype = ctypes.c_int
_lib.rkllm_abort.argtypes = [ctypes.c_void_p]
_lib.rkllm_abort.restype = ctypes.c_int
_lib.rkllm_is_running.argtypes = [ctypes.c_void_p]
_lib.rkllm_is_running.restype = ctypes.c_int

# Python wrapper functions
def create_default_param() -> RKLLMParam:
    return _lib.rkllm_createDefaultParam()

def init(param: RKLLMParam, callback: Callable[[RKLLMResult, Any, LLMCallState], None]) -> ctypes.c_void_p:
    handle = ctypes.c_void_p()
    c_callback = LLMResultCallback(callback)
    status = _lib.rkllm_init(ctypes.byref(handle), ctypes.byref(param), c_callback)
    if status != 0:
        raise RuntimeError(f"Failed to initialize RKLLM: {status}")
    return handle

def load_lora(handle: ctypes.c_void_p, lora_adapter: RKLLMLoraAdapter) -> None:
    status = _lib.rkllm_load_lora(handle, ctypes.byref(lora_adapter))
    if status != 0:
        raise RuntimeError(f"Failed to load Lora adapter: {status}")

def load_prompt_cache(handle: ctypes.c_void_p, prompt_cache_path: str) -> None:
    status = _lib.rkllm_load_prompt_cache(handle, prompt_cache_path.encode())
    if status != 0:
        raise RuntimeError(f"Failed to load prompt cache: {status}")

def release_prompt_cache(handle: ctypes.c_void_p) -> None:
    status = _lib.rkllm_release_prompt_cache(handle)
    if status != 0:
        raise RuntimeError(f"Failed to release prompt cache: {status}")

def destroy(handle: ctypes.c_void_p) -> None:
    status = _lib.rkllm_destroy(handle)
    if status != 0:
        raise RuntimeError(f"Failed to destroy RKLLM: {status}")

def run(handle: ctypes.c_void_p, rkllm_input: RKLLMInput, rkllm_infer_params: RKLLMInferParam, userdata: Any) -> None:
    status = _lib.rkllm_run(handle, ctypes.byref(rkllm_input), ctypes.byref(rkllm_infer_params), userdata)
    if status != 0:
        raise RuntimeError(f"Failed to run RKLLM: {status}")

def run_async(handle: ctypes.c_void_p, rkllm_input: RKLLMInput, rkllm_infer_params: RKLLMInferParam, userdata: Any) -> None:
    status = _lib.rkllm_run_async(handle, ctypes.byref(rkllm_input), ctypes.byref(rkllm_infer_params), userdata)
    if status != 0:
        raise RuntimeError(f"Failed to run RKLLM asynchronously: {status}")

def abort(handle: ctypes.c_void_p) -> None:
    status = _lib.rkllm_abort(handle)
    if status != 0:
        raise RuntimeError(f"Failed to abort RKLLM: {status}")

def is_running(handle: ctypes.c_void_p) -> bool:
    return _lib.rkllm_is_running(handle) == 0

# Helper function to convert numpy array to C array
def numpy_to_c_array(arr: np.ndarray, c_type):
    return arr.ctypes.data_as(ctypes.POINTER(c_type))

# Helper function to create RKLLMInput
def create_rkllm_input(input_type: RKLLMInputType, **kwargs) -> RKLLMInput:
    rkllm_input = RKLLMInput()
    rkllm_input.input_type = input_type.value

    if input_type == RKLLMInputType.RKLLM_INPUT_PROMPT:
        rkllm_input._input.prompt_input = kwargs['prompt'].encode()
    elif input_type == RKLLMInputType.RKLLM_INPUT_EMBED:
        embed = kwargs['embed']
        rkllm_input._input.embed_input.embed = numpy_to_c_array(embed, ctypes.c_float)
        # rkllm_input._input.embed_input.n_tokens = embed.shape[1]
        rkllm_input._input.embed_input.n_tokens = embed.shape[2]
    elif input_type == RKLLMInputType.RKLLM_INPUT_TOKEN:
        tokens = kwargs['tokens']
        rkllm_input._input.token_input.input_ids = numpy_to_c_array(tokens, ctypes.c_int32)
        rkllm_input._input.token_input.n_tokens = tokens.shape[1]
    elif input_type == RKLLMInputType.RKLLM_INPUT_MULTIMODAL:
        rkllm_input._input.multimodal_input.prompt = kwargs['prompt'].encode()
        image_embed = kwargs['image_embed']
        rkllm_input._input.multimodal_input.image_embed = numpy_to_c_array(image_embed, ctypes.c_float)
        rkllm_input._input.multimodal_input.n_image_tokens = image_embed.shape[1]

    return rkllm_input
run_rknn.py
ADDED
@@ -0,0 +1,121 @@
import os
import time
import numpy as np
from rkllm_binding import *
from rknnlite.api.rknn_lite import RKNNLite
import signal
import cv2

MODEL_PATH = "/home/firefly/qwen.rkllm"
VISION_ENCODER_PATH = "vision_transformer.rknn"
handle = None
img_size = 448

# exit on ctrl-c
def signal_handler(signal, frame):
    print("Ctrl-C pressed, exiting...")
    global handle
    if handle:
        abort(handle)
        destroy(handle)
    exit(0)

signal.signal(signal.SIGINT, signal_handler)

# export RKLLM_LOG_LEVEL=1
os.environ["RKLLM_LOG_LEVEL"] = "1"

inference_count = 0
inference_start_time = 0
def result_callback(result, userdata, state):
    global inference_start_time
    global inference_count
    if state == LLMCallState.RKLLM_RUN_NORMAL:
        if inference_count == 0:
            first_token_time = time.time()
            print(f"Time to first token: {first_token_time - inference_start_time:.2f} seconds")
        inference_count += 1
        print(result.contents.text.decode(), end="", flush=True)
    elif state == LLMCallState.RKLLM_RUN_FINISH:
        print("\n\n(finished)")
    elif state == LLMCallState.RKLLM_RUN_ERROR:
        print("\nError occurred during LLM call")

# Initialize vision encoder
vision_encoder = RKNNLite(verbose=False)
model_size = os.path.getsize(VISION_ENCODER_PATH)
print(f"Start loading vision encoder model (size: {model_size / 1024 / 1024:.2f} MB)")
start_time = time.time()
vision_encoder.load_rknn(VISION_ENCODER_PATH)
end_time = time.time()
print(f"Vision encoder loaded in {end_time - start_time:.2f} seconds (speed: {model_size / (end_time - start_time) / 1024 / 1024:.2f} MB/s)")
vision_encoder.init_runtime()

# image embedding
img_path = "test.jpg"

normalize_mean = 0.5
normalize_std = 0.5

img = cv2.imread(img_path)
img = cv2.resize(img, (img_size, img_size))
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img.astype(np.float32)
# img = img / 255.0
# img = (img - normalize_mean) / normalize_std
img = img[np.newaxis, :, :, :]
print(img.shape)
start_time = time.time()
image_embeddings = vision_encoder.inference(inputs=[img.astype(np.float32)], data_type="float32", data_format="nhwc")[0]
end_time = time.time()
print(f"Vision encoder inference time: {end_time - start_time:.2f} seconds")
print(image_embeddings.flags)
print(image_embeddings.shape)
np.save("image_embeddings_rknn.npy", image_embeddings)


vision_encoder.release() # free memory, rockchip plz fix this

# Initialize RKLLM
param = create_default_param()
param.model_path = MODEL_PATH.encode()
param.img_start = "<image>".encode()
param.img_end = "</image>".encode()
param.img_content = "<unk>".encode()
extend_param = RKLLMExtendParam()
extend_param.base_domain_id = 0  # iommu domain 0 for vision encoder
param.extend_param = extend_param
model_size = os.path.getsize(MODEL_PATH)
print(f"Start loading language model (size: {model_size / 1024 / 1024:.2f} MB)")
start_time = time.time()
handle = init(param, result_callback)
end_time = time.time()
print(f"Language model loaded in {end_time - start_time:.2f} seconds (speed: {model_size / (end_time - start_time) / 1024 / 1024:.2f} MB/s)")

# Create input
prompt = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
<image>
详细介绍一下这张图片: <|im_end|>
<|im_start|>assistant

"""
# 2.56->3.25>2.41->10.2
# image_embeddings = np.load("image_embeddings_pth_orig.npy")
# print(image_embeddings.shape)
# rkllm_input = create_rkllm_input(RKLLMInputType.RKLLM_INPUT_EMBED, embed=image_embeddings.astype(np.float32))

rkllm_input = create_rkllm_input(RKLLMInputType.RKLLM_INPUT_MULTIMODAL, prompt=prompt, image_embed=image_embeddings.astype(np.float32))

# Create inference parameters
infer_param = RKLLMInferParam()
infer_param.mode = RKLLMInferMode.RKLLM_INFER_GENERATE.value

# Run RKLLM
print("Start inference...")
inference_start_time = time.time()
run(handle, rkllm_input, infer_param, None)

# Clean up
destroy(handle)
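
Note that run_rknn.py feeds the raw resized BGR pixels straight into the RKNN model: the normalization lines are intentionally commented out because vision_convert_rknn.py (further below) bakes `mean_values=[128, 128, 128]` and `std_values=[128, 128, 128]` into the converted graph. A minimal sketch of why that is numerically close to the `0.5`/`0.5` normalization the script leaves commented out (this snippet is illustrative and not part of the repo):

```python
import numpy as np

x = np.arange(0, 256, dtype=np.float32)       # possible 8-bit pixel values
inside_rknn = (x - 128.0) / 128.0             # what the RKNN preprocessing applies
reference   = (x / 255.0 - 0.5) / 0.5         # the 0.5/0.5 normalization commented out above
print(np.abs(inside_rknn - reference).max())  # about 0.008, i.e. practically the same scale
```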
special_tokens_map.json
ADDED
@@ -0,0 +1,172 @@
{
  "additional_special_tokens": [
    {"content": "<image>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</image>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<ref>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</ref>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<box>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</box>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<quad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</quad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<point>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</point>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<slice>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</slice>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<image_id>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "</image_id>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_0|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_1|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_2|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_3|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_4|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
    {"content": "<|reserved_special_token_5|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
  ],
  "bos_token": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
test.jpg
ADDED
tokenization_minicpmv_fast.py
ADDED
@@ -0,0 +1,66 @@
from transformers.models.qwen2 import Qwen2TokenizerFast


class MiniCPMVTokenizerFast(Qwen2TokenizerFast):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.im_start = "<image>"
        self.im_end = "</image>"
        self.ref_start = "<ref>"
        self.ref_end = "</ref>"
        self.box_start = "<box>"
        self.box_end = "</box>"
        self.quad_start = "<quad>"
        self.quad_end = "</quad>"
        self.slice_start = "<slice>"
        self.slice_end = "</slice>"
        self.im_id_start = "<image_id>"
        self.im_id_end = "</image_id>"

    @property
    def eos_id(self):
        return self.eos_token_id

    @property
    def bos_id(self):
        return self.bos_token_id

    @property
    def unk_id(self):
        return self.unk_token_id

    @property
    def im_start_id(self):
        return self.convert_tokens_to_ids(self.im_start)

    @property
    def im_end_id(self):
        return self.convert_tokens_to_ids(self.im_end)

    @property
    def slice_start_id(self):
        return self.convert_tokens_to_ids(self.slice_start)

    @property
    def slice_end_id(self):
        return self.convert_tokens_to_ids(self.slice_end)

    @property
    def im_id_start_id(self):
        return self.convert_tokens_to_ids(self.im_id_start)

    @property
    def im_id_end_id(self):
        return self.convert_tokens_to_ids(self.im_id_end)

    @property
    def newline_id(self):
        return self.convert_tokens_to_ids('\n')

    @staticmethod
    def escape(text: str) -> str:
        return text

    @staticmethod
    def unescape(text: str) -> str:
        return text
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,235 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "128244": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<image>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "</image>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<ref>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "</ref>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<box>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "</box>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<quad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "</quad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<point>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "</point>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<slice>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "</slice>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151658": {"content": "<image_id>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151659": {"content": "</image_id>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151660": {"content": "<|reserved_special_token_0|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151661": {"content": "<|reserved_special_token_1|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151662": {"content": "<|reserved_special_token_2|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151663": {"content": "<|reserved_special_token_3|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151664": {"content": "<|reserved_special_token_4|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151665": {"content": "<|reserved_special_token_5|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "additional_special_tokens": ["<image>", "</image>", "<ref>", "</ref>", "<box>", "</box>", "<quad>", "</quad>", "<point>", "</point>", "<slice>", "</slice>", "<image_id>", "</image_id>", "<|reserved_special_token_0|>", "<|reserved_special_token_1|>", "<|reserved_special_token_2|>", "<|reserved_special_token_3|>", "<|reserved_special_token_4|>", "<|reserved_special_token_5|>"],
  "bos_token": "<|im_start|>",
  "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_minicpmv_fast.MiniCPMVTokenizerFast",
      null
    ]
  },
  "tokenizer_class": "MiniCPMVTokenizerFast",
  "unk_token": "<unk>"
}
vision_convert_rknn.py
ADDED
@@ -0,0 +1,87 @@
#!/usr/bin/env python
# coding: utf-8

import os
from rknn.api import RKNN
from sys import exit
import argparse
import cv2
import numpy as np
os.chdir(os.path.dirname(os.path.abspath(__file__)))

image_sizes = [[448, 448]]
batch_sizes = [1]

def convert_encoder():
    rknn = RKNN(verbose=True)

    ONNX_MODEL = f"vision_transformer.onnx"
    RKNN_MODEL = ONNX_MODEL.replace(".onnx", ".rknn")
    DATASET = "dataset.txt"
    QUANTIZE = False
    input_shapes = [[[batch_size, 3, image_size[0], image_size[1]]] for batch_size in batch_sizes for image_size in image_sizes]
    print(input_shapes)

    # pre-process config
    print('--> Config model')
    rknn.config(quantized_algorithm='normal', quantized_method='channel', target_platform='rk3588', optimization_level=3,
                mean_values=[128, 128, 128], std_values=[128, 128, 128], dynamic_input=input_shapes) # mean_values=[0.5, 0.5, 0.5], std_values=[0.5, 0.5, 0.5],
    print('done')

    # Load ONNX model
    print("--> Loading model")
    ret = rknn.load_onnx(
        model=ONNX_MODEL,
    )

    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE, dataset=DATASET, rknn_batch_size=None)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # export
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export RKNN model failed!')
        exit(ret)
    print('done')
    rknn.init_runtime(target='rk3588')
    # # image embedding
    # img_path = "test.jpg"

    # normalize_mean = [0.5, 0.5, 0.5]
    # normalize_std = [0.5, 0.5, 0.5]

    # img = cv2.imread(img_path)
    # img = cv2.resize(img, (448, 448))
    # # img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # img = img.astype(np.float32)
    # # img = (img - normalize_mean) / normalize_std
    # img = img[np.newaxis, :, :, :]
    # img = img.transpose(0, 3, 1, 2)
    # np.save("img.npy", img)
    # rknn.accuracy_analysis(inputs=["img.npy"], target='rk3588')
# usage: python convert_rknn.py encoder|all

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("model", type=str, help="model to convert", choices=["encoder", "all"], nargs='?')
    args = parser.parse_args()
    if args.model is None:
        args.model = "all"
    if args.model == "encoder":
        convert_encoder()
    elif args.model == "all":
        convert_encoder()
    else:
        print(f"Unknown model: {args.model}")
        exit(1)
vision_export_onnx.py
ADDED
@@ -0,0 +1,53 @@
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_PATH = "../MiniCPM-V-2_6/"
DEVICE_MAP = "cpu"

origin_model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, trust_remote_code=True, attn_implementation='eager', device_map=DEVICE_MAP).eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

for param in origin_model.parameters():
    param.requires_grad = False

class VisionTransformer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.vpm = origin_model.vpm
        self.resampler = origin_model.resampler
        self.tgt_sizes = torch.Tensor([[32, 32]]).type(torch.int32)

    def forward(self, pixel_values):
        vit_embeds = self.vpm(pixel_values).last_hidden_state
        vit_embeds = self.resampler(vit_embeds, self.tgt_sizes)
        return vit_embeds


def convert_vision_transformer():
    model = VisionTransformer()
    IMAGE_SIZE = 448
    pixel_values = torch.randn(
        (1, 3, IMAGE_SIZE, IMAGE_SIZE))

    # test first
    vit_embeds = model(pixel_values)
    print(vit_embeds.shape)  # 1x64x3584
    if vit_embeds.shape != (1, 64, 3584):
        raise ValueError("vit_embeds shape is not correct, something is wrong")


    torch.onnx.export(model, pixel_values,
                      f'vision_transformer.onnx',
                      verbose=False,
                      input_names=['pixel_values'],
                      output_names=['vit_embeds'],
                      dynamic_axes={'pixel_values': {0: 'batch_size', 2: 'height', 3: 'width'},
                                    'vit_embeds': {0: 'batch_size', 1: 'seq_len'}},
                      do_constant_folding=True,
                      opset_version=17)

if __name__ == "__main__":
    convert_vision_transformer()
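
Before running vision_convert_rknn.py, it can be worth checking the exported graph on its own. A minimal sketch using onnxruntime (not part of this repo; assumes `pip install onnxruntime` and that vision_transformer.onnx sits in the current directory):

```python
import numpy as np
import onnxruntime as ort

# Load the exported vision encoder and run it on a random 448x448 input
sess = ort.InferenceSession("vision_transformer.onnx", providers=["CPUExecutionProvider"])
pixel_values = np.random.randn(1, 3, 448, 448).astype(np.float32)
vit_embeds = sess.run(["vit_embeds"], {"pixel_values": pixel_values})[0]
print(vit_embeds.shape)  # expected (1, 64, 3584), matching the check in the export script
```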
vision_transformer.rknn
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d470c9d9b2c2b60fba30fb962a737ca578eb09f3d9d379e0a76684afd300984
size 988060799
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff