print(f'Assistant: {response}')
```

## Deployment

### LMDeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.

```sh
pip install lmdeploy
```

You can run batch inference locally with the following Python code:

> This model is not yet supported by LMDeploy.

```python
from lmdeploy import ChatTemplateConfig, pipeline
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-4B'
# System prompt (in Chinese): "I am InternVL (书生·万象), a multimodal
# foundation model jointly developed by Shanghai AI Laboratory and several
# partner institutions. The laboratory is committed to original technological
# innovation, open source, sharing, and co-creation, advancing scientific
# progress and industrial development."
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

# InternVL2-4B uses the Phi-3 chat template; override its default system prompt.
chat_template_config = ChatTemplateConfig('phi-3')
chat_template_config.meta_instruction = system_prompt

pipe = pipeline(model, chat_template_config=chat_template_config)
response = pipe(('describe this image', image))
print(response)
```

## License

This project is released under the MIT license.