nielsr (HF staff) committed
Commit 8e4c4c3 (parent: 6ec3834)

Add pipeline tag, link to paper


This PR improves the model card by:
- making sure the model can be found at https://huggingface.co/models?pipeline_tag=video-text-to-text&sort=trending
- linking it to https://huggingface.co/papers/2410.17434

It would be great to update all the other model cards with this!
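Rolling the same change out to the other model cards could be scripted. Below is a minimal, stdlib-only sketch of a hypothetical helper that inserts a `pipeline_tag` entry into a card's YAML front matter if it is missing (a real batch update would more likely go through the `huggingface_hub` model-card API; the helper name and the string-based approach are illustrative assumptions, not the method used in this PR):

```python
def add_pipeline_tag(card_text: str, tag: str = "video-text-to-text") -> str:
    """Insert `pipeline_tag: <tag>` into a model card's YAML front matter.

    Hypothetical helper for illustration. Front matter is the block
    delimited by a pair of `---` lines at the top of README.md.
    """
    lines = card_text.splitlines()
    # No front matter at all: leave the card untouched.
    if not lines or lines[0].strip() != "---":
        return card_text
    try:
        end = lines.index("---", 1)  # closing front-matter delimiter
    except ValueError:
        return card_text  # unterminated front matter; do nothing
    # Idempotent: skip cards that already declare a pipeline_tag.
    if any(line.startswith("pipeline_tag:") for line in lines[1:end]):
        return card_text
    lines.insert(end, f"pipeline_tag: {tag}")
    return "\n".join(lines) + "\n"


card = """---
base_model:
- Vision-CAIR/LongVU_Qwen2_7B_img
---
# LongVU
"""
print(add_pipeline_tag(card))
```

Because the helper checks for an existing `pipeline_tag` line, it is safe to run over a whole list of repos without double-inserting.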

Files changed (1)
  1. README.md +3 -0
README.md CHANGED

```diff
@@ -4,6 +4,7 @@ datasets:
 - shenxq/VideoChat2
 base_model:
 - Vision-CAIR/LongVU_Qwen2_7B_img
+pipeline_tag: video-text-to-text
 model-index:
 - name: llava-onevision-qwen-7b-ov
   results:
@@ -50,6 +51,8 @@ model-index:
 ---
 # LongVU
 
+This repository contains the model based on Qwen2-7B as presented in [LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding](https://huggingface.co/papers/2410.17434).
+
 Play with the model on the [HF demo](https://huggingface.co/spaces/Vision-CAIR/LongVU).
 
 <div align="left">
```