Vision-CAIR · nielsr (HF staff) committed
Commit f72d9fe
1 Parent(s): 6ec3834

Add pipeline tag, link to paper (#2)


- Add pipeline tag, link to paper (8e4c4c3fe8eea78680138f1073a5410353416ee1)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
README.md (+3, -0)
README.md CHANGED
@@ -4,6 +4,7 @@ datasets:
 - shenxq/VideoChat2
 base_model:
 - Vision-CAIR/LongVU_Qwen2_7B_img
+pipeline_tag: video-text-to-text
 model-index:
 - name: llava-onevision-qwen-7b-ov
   results:
@@ -50,6 +51,8 @@ model-index:
 ---
 # LongVU
 
+This repository contains the model based on Qwen2-7B as presented in [LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding](https://huggingface.co/papers/2410.17434).
+
 Play with the model on the [HF demo](https://huggingface.co/spaces/Vision-CAIR/LongVU).
 
 <div align="left">
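
For reference, the new `pipeline_tag` is machine-readable metadata that the Hub API exposes once this commit lands. Below is a minimal sketch of checking it with `huggingface_hub`, assuming the repo id is `Vision-CAIR/LongVU_Qwen2_7B` (the diff itself does not name the repo it targets):

```python
# Minimal sketch: confirm the pipeline tag added in this commit via the Hub API.
# The repo id below is an assumption; the diff does not state which repo it belongs to.
from huggingface_hub import model_info

info = model_info("Vision-CAIR/LongVU_Qwen2_7B")
print(info.pipeline_tag)  # expected: "video-text-to-text" after this commit
```

Beyond the API, the tag also makes the model discoverable under the video-text-to-text task filter on the Hub.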