ZhangYuanhan committed
Commit 7159786 · 1 parent: 70af2bb

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -116,7 +116,7 @@ base_model:
 - lmms-lab/llava-onevision-qwen2-7b-si
 ---
 
-# LLaVA-NeXT-Video-7B-Qwen2-video-only
+# LLaVA-NeXT-Video-7B-Qwen2-Video-Only
 
 ## Table of Contents
 
@@ -142,7 +142,7 @@ This model support at most 110 frames.
 
 ### Intended use
 
-The model was trained on [LLaVA-NeXT-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Video-SFT-Data) and have the ability to interact with images, multi-image and videos, but specific to videos.
+The model was trained on [LLaVA-NeXT-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Video-SFT-Data) and have the ability to interact with videos.
 
 **Feel free to share your generations in the Community tab!**
 
@@ -183,7 +183,7 @@ def load_video(self, video_path, max_frames_num,fps=1,force_sample=False):
     spare_frames = vr.get_batch(frame_idx).asnumpy()
     # import pdb;pdb.set_trace()
     return spare_frames,frame_time,video_time
-pretrained = "lmms-lab/LLaVA-NeXT-Video-7B-Qwen2"
+pretrained = "lmms-lab/LLaVA-NeXT-Video-7B-Qwen2-Video-Only"
 model_name = "llava_qwen"
 device = "cuda"
 device_map = "auto"
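For context, the edited `pretrained` line belongs to the model card's inference example, of which the diff only shows fragments. Below is a minimal, self-contained sketch of that setup with the renamed checkpoint. It assumes the `llava` package from the LLaVA-NeXT repository and `decord` are installed; the frame-sampling details not visible in the diff follow the card's published example and are illustrative, and the stray `self` parameter on the standalone `load_video` function is dropped.

```python
import numpy as np
from decord import VideoReader, cpu
from llava.model.builder import load_pretrained_model  # from the LLaVA-NeXT repo

def load_video(video_path, max_frames_num, fps=1, force_sample=False):
    """Uniformly sample frames from a video with decord.

    Returns the sampled frames, a printable string of their timestamps,
    and the total video duration in seconds.
    """
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    video_time = total_frame_num / vr.get_avg_fps()
    # Take roughly one frame per 1/fps seconds of video.
    step = round(vr.get_avg_fps() / fps)
    frame_idx = list(range(0, total_frame_num, step))
    # Fall back to uniform sampling when the clip yields too many frames.
    if len(frame_idx) > max_frames_num or force_sample:
        frame_idx = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int).tolist()
    frame_time = ",".join(f"{i / vr.get_avg_fps():.2f}s" for i in frame_idx)
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return spare_frames, frame_time, video_time

pretrained = "lmms-lab/LLaVA-NeXT-Video-7B-Qwen2-Video-Only"
model_name = "llava_qwen"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, model_name, device_map=device_map
)
model.eval()

# The card states the model supports at most 110 frames, so cap sampling there.
# "video.mp4" is a placeholder path.
frames, frame_times, video_time = load_video("video.mp4", max_frames_num=110, force_sample=True)
```

The sampled `frames` array would then be preprocessed with `image_processor` and passed to `model` as in the rest of the card's example, which the diff does not show.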