czczup committed
Commit e779c52 (1 parent: a8040d2)

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +66 -42
README.md CHANGED
@@ -9,6 +9,8 @@ pipeline_tag: image-text-to-text
 
  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) \[🌟 [魔搭社区](https://modelscope.cn/organization/OpenGVLab) | [教程](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg) \]
 
  ## Introduction
 
  We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned models**, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-Llama3-76B model.
@@ -17,6 +19,16 @@ Compared to the state-of-the-art open-source multimodal large language models, I
 
  InternVL 2.0 is trained with an 8k context window and utilizes training data consisting of long texts, multiple images, and videos, significantly improving its ability to handle these types of inputs compared to InternVL 1.5. For more details, please refer to our blog and GitHub.
 
  ## Model Details
 
  InternVL 2.0 is a multimodal large language model series, featuring models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-Llama3-76B consists of [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), an MLP projector, and [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B).
@@ -25,27 +37,28 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
 
  ### Image Benchmarks
 
- | Benchmark | GPT-4T-20240409 | Gemini-1.5-Pro | InternVL2-40B | InternVL2-Llama3-76B |
- | :--------------------------: | :-------------: | :------------: | :-----------: | :------------------: |
- | Model Size | - | - | 40B | 76B |
- | | | | | |
- | DocVQA<sub>test</sub> | 87.2 | 86.5 | 93.9 | 94.1 |
- | ChartQA<sub>test</sub> | 78.1 | 81.3 | 86.2 | 88.4 |
- | InfoVQA<sub>test</sub> | - | 72.7 | 78.7 | 82.0 |
- | TextVQA<sub>val</sub> | - | 73.5 | 83.0 | 84.4 |
- | OCRBench | 678 | 754 | 837 | 839 |
- | MME<sub>sum</sub> | 2070.2 | 2110.6 | 2315.0 | 2414.7 |
- | RealWorldQA | 68.0 | 67.5 | 71.8 | 72.2 |
- | AI2D<sub>test</sub> | 89.4 | 80.3 | 87.1 | 87.6 |
- | MMMU<sub>val</sub> | 63.1 / 61.7 | 58.5 / 60.6 | 53.9 / 55.2 | 55.2 / 58.2 |
- | MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 86.8 | 86.5 |
- | MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 86.5 | 86.3 |
- | CCBench<sub>dev</sub> | 57.3 | 28.4 | 80.6 | 81.0 |
- | MMVet<sub>GPT-4-0613</sub> | - | - | 68.5 | 69.8 |
- | MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 65.5 | 65.7 |
- | SEED-Image | - | - | 78.2 | 78.2 |
- | HallBench<sub>avg</sub> | 43.9 | 45.6 | 56.9 | 55.2 |
- | MathVista<sub>testmini</sub> | 58.1 | 57.7 | 63.7 | 65.5 |
 
 
  - We simultaneously use InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
 
@@ -426,6 +439,16 @@ If you find this project useful in your research, please consider citing:
 
  InternVL 2.0 is trained with an 8k context window, and its training data includes long-text, multi-image, and video data; compared with InternVL 1.5, its ability to handle these types of inputs is significantly improved. For more details, please refer to our blog and GitHub.
 
  ## Model Details
 
  InternVL 2.0 is a multimodal large language model series that includes models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-Llama3-76B consists of [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), an MLP projector, and [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B).
@@ -434,27 +457,28 @@ InternVL 2.0 是一个多模态大语言模型系列,包含各种规模的模
 
  ### Image Benchmarks
 
- | Benchmark | GPT-4T-20240409 | Gemini-1.5-Pro | InternVL2-40B | InternVL2-Llama3-76B |
- | :--------------------------: | :-------------: | :------------: | :-----------: | :------------------: |
- | Model Size | - | - | 40B | 76B |
- | | | | | |
- | DocVQA<sub>test</sub> | 87.2 | 86.5 | 93.9 | 94.1 |
- | ChartQA<sub>test</sub> | 78.1 | 81.3 | 86.2 | 88.4 |
- | InfoVQA<sub>test</sub> | - | 72.7 | 78.7 | 82.0 |
- | TextVQA<sub>val</sub> | - | 73.5 | 83.0 | 84.4 |
- | OCRBench | 678 | 754 | 837 | 839 |
- | MME<sub>sum</sub> | 2070.2 | 2110.6 | 2315.0 | 2414.7 |
- | RealWorldQA | 68.0 | 67.5 | 71.8 | 72.2 |
- | AI2D<sub>test</sub> | 89.4 | 80.3 | 87.1 | 87.6 |
- | MMMU<sub>val</sub> | 63.1 / 61.7 | 58.5 / 60.6 | 53.9 / 55.2 | 55.2 / 58.2 |
- | MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 86.8 | 86.5 |
- | MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 86.5 | 86.3 |
- | CCBench<sub>dev</sub> | 57.3 | 28.4 | 80.6 | 81.0 |
- | MMVet<sub>GPT-4-0613</sub> | - | - | 68.5 | 69.8 |
- | MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 65.5 | 65.7 |
- | SEED-Image | - | - | 78.2 | 78.2 |
- | HallBench<sub>avg</sub> | 43.9 | 45.6 | 56.9 | 55.2 |
- | MathVista<sub>testmini</sub> | 58.1 | 57.7 | 63.7 | 65.5 |
 
 
  - We use both the InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository, while MMMU, OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using VLMEvalKit.
 
 
 
  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) \[🌟 [魔搭社区](https://modelscope.cn/organization/OpenGVLab) | [教程](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg) \]
 
+ [Switch to Chinese version](#简介)
+
  ## Introduction
 
  We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned models**, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-Llama3-76B model.
 
 
  InternVL 2.0 is trained with an 8k context window and utilizes training data consisting of long texts, multiple images, and videos, significantly improving its ability to handle these types of inputs compared to InternVL 1.5. For more details, please refer to our blog and GitHub.
 
+ | Model Name | Vision Part | Language Part | HF Link | MS Link |
+ | :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | :--------------------------------------------------------------------: |
+ | InternVL2-1B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-1B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-1B) |
+ | InternVL2-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-2B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-2B) |
+ | InternVL2-4B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-4B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-4B) |
+ | InternVL2-8B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-8B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-8B) |
+ | InternVL2-26B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-26B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-26B) |
+ | InternVL2-40B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-40B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-40B) |
+ | InternVL2-Llama3-76B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-Llama3-76B) |
+
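For scripting against these releases, the vision/language pairings in the table above can be captured as a small lookup. This is a convenience sketch, not an official API: the `MODEL_PARTS` mapping and the `hf_repo` helper are hypothetical names of my own; only the pairings themselves come from the table.

```python
# Vision/language component pairing for each InternVL2 release,
# transcribed from the model table above. MODEL_PARTS and hf_repo
# are illustrative names, not part of any official package.
MODEL_PARTS = {
    "InternVL2-1B": ("InternViT-300M-448px", "Qwen2-0.5B-Instruct"),
    "InternVL2-2B": ("InternViT-300M-448px", "internlm2-chat-1_8b"),
    "InternVL2-4B": ("InternViT-300M-448px", "Phi-3-mini-128k-instruct"),
    "InternVL2-8B": ("InternViT-300M-448px", "internlm2_5-7b-chat"),
    "InternVL2-26B": ("InternViT-6B-448px-V1-5", "internlm2-chat-20b"),
    "InternVL2-40B": ("InternViT-6B-448px-V1-5", "Nous-Hermes-2-Yi-34B"),
    "InternVL2-Llama3-76B": ("InternViT-6B-448px-V1-5", "Hermes-2-Theta-Llama-3-70B"),
}

def hf_repo(name: str) -> str:
    """Build the Hugging Face repo id for an InternVL2 model name."""
    return f"OpenGVLab/{name}"

vision, language = MODEL_PARTS["InternVL2-Llama3-76B"]
print(vision)                    # InternViT-6B-448px-V1-5
print(hf_repo("InternVL2-8B"))   # OpenGVLab/InternVL2-8B
```

All releases from 1B through 8B share the 300M-parameter vision encoder, while the 26B and larger models use the 6B encoder named in the Model Details section.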
  ## Model Details
 
  InternVL 2.0 is a multimodal large language model series, featuring models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-Llama3-76B consists of [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), an MLP projector, and [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B).
 
 
  ### Image Benchmarks
 
+ | Benchmark | GPT-4o-20240513 | Claude3.5-Sonnet | InternVL2-40B | InternVL2-Llama3-76B |
+ | :-----------------------------: | :-------------: | :--------------: | :-----------: | :------------------: |
+ | Model Size | - | - | 40B | 76B |
+ | | | | | |
+ | DocVQA<sub>test</sub> | 92.8 | 95.2 | 93.9 | 94.1 |
+ | ChartQA<sub>test</sub> | 85.7 | 90.8 | 86.2 | 88.4 |
+ | InfoVQA<sub>test</sub> | - | - | 78.7 | 82.0 |
+ | TextVQA<sub>val</sub> | - | - | 83.0 | 84.4 |
+ | OCRBench | 736 | 788 | 837 | 839 |
+ | MME<sub>sum</sub> | 2328.7 | 1920.0 | 2315.0 | 2414.7 |
+ | RealWorldQA | 75.4 | 60.1 | 71.8 | 72.2 |
+ | AI2D<sub>test</sub> | 94.2 | 94.7 | 87.1 | 87.6 |
+ | MMMU<sub>val</sub> | 69.1 / 69.2 | 68.3 / 65.9 | 53.9 / 55.2 | 55.2 / 58.2 |
+ | MMBench-EN<sub>test</sub> | 83.4 | 79.7 | 86.8 | 86.5 |
+ | MMBench-CN<sub>test</sub> | 82.1 | 80.7 | 86.5 | 86.3 |
+ | CCBench<sub>dev</sub> | 71.2 | 54.1 | 80.6 | 81.0 |
+ | MMVet<sub>GPT-4-0613</sub> | - | - | 68.5 | 69.8 |
+ | MMVet<sub>GPT-4-Turbo</sub> | 69.1 | 66.0 | 65.5 | 65.7 |
+ | SEED-Image | 77.1 | - | 78.2 | 78.2 |
+ | HallBench<sub>avg</sub> | 55.0 | 49.9 | 56.9 | 55.2 |
+ | MathVista<sub>testmini</sub> | 63.8 | 67.7 | 63.7 | 65.5 |
+ | OpenCompass<sub>avg-score</sub> | 69.9 | 67.9 | 69.7 | 70+ |
 
  - We simultaneously use InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
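The evaluation split described above can also be encoded as a lookup, which is convenient when labeling result tables. This is a minimal sketch: the `EVAL_TOOLKIT` name and list variables are illustrative choices; only the benchmark-to-harness assignments come from the note above.

```python
# Which harness produced each reported score, per the evaluation note.
# EVAL_TOOLKIT is an illustrative name, not an official API.
INTERNVL_REPO_BENCHMARKS = [
    "DocVQA", "ChartQA", "InfoVQA", "TextVQA", "MME", "AI2D",
    "MMBench", "CCBench", "MMVet", "SEED-Image",
]
VLMEVALKIT_BENCHMARKS = ["OCRBench", "RealWorldQA", "HallBench", "MathVista"]

EVAL_TOOLKIT = {b: "InternVL" for b in INTERNVL_REPO_BENCHMARKS}
EVAL_TOOLKIT.update({b: "VLMEvalKit" for b in VLMEVALKIT_BENCHMARKS})

print(EVAL_TOOLKIT["OCRBench"])  # VLMEvalKit
print(EVAL_TOOLKIT["DocVQA"])    # InternVL
```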
 
 
  InternVL 2.0 is trained with an 8k context window, and its training data includes long-text, multi-image, and video data; compared with InternVL 1.5, its ability to handle these types of inputs is significantly improved. For more details, please refer to our blog and GitHub.
 
+ | Model Name | Vision Part | Language Part | HF Link | MS Link |
+ | :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | :--------------------------------------------------------------------: |
+ | InternVL2-1B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-1B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-1B) |
+ | InternVL2-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-2B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-2B) |
+ | InternVL2-4B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-4B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-4B) |
+ | InternVL2-8B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-8B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-8B) |
+ | InternVL2-26B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-26B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-26B) |
+ | InternVL2-40B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-40B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-40B) |
+ | InternVL2-Llama3-76B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-Llama3-76B) |
+
  ## Model Details
 
  InternVL 2.0 is a multimodal large language model series that includes models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-Llama3-76B consists of [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), an MLP projector, and [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B).
 
 
  ### Image Benchmarks
 
+ | Benchmark | GPT-4o-20240513 | Claude3.5-Sonnet | InternVL2-40B | InternVL2-Llama3-76B |
+ | :-----------------------------: | :-------------: | :--------------: | :-----------: | :------------------: |
+ | Model Size | - | - | 40B | 76B |
+ | | | | | |
+ | DocVQA<sub>test</sub> | 92.8 | 95.2 | 93.9 | 94.1 |
+ | ChartQA<sub>test</sub> | 85.7 | 90.8 | 86.2 | 88.4 |
+ | InfoVQA<sub>test</sub> | - | - | 78.7 | 82.0 |
+ | TextVQA<sub>val</sub> | - | - | 83.0 | 84.4 |
+ | OCRBench | 736 | 788 | 837 | 839 |
+ | MME<sub>sum</sub> | 2328.7 | 1920.0 | 2315.0 | 2414.7 |
+ | RealWorldQA | 75.4 | 60.1 | 71.8 | 72.2 |
+ | AI2D<sub>test</sub> | 94.2 | 94.7 | 87.1 | 87.6 |
+ | MMMU<sub>val</sub> | 69.1 / 69.2 | 68.3 / 65.9 | 53.9 / 55.2 | 55.2 / 58.2 |
+ | MMBench-EN<sub>test</sub> | 83.4 | 79.7 | 86.8 | 86.5 |
+ | MMBench-CN<sub>test</sub> | 82.1 | 80.7 | 86.5 | 86.3 |
+ | CCBench<sub>dev</sub> | 71.2 | 54.1 | 80.6 | 81.0 |
+ | MMVet<sub>GPT-4-0613</sub> | - | - | 68.5 | 69.8 |
+ | MMVet<sub>GPT-4-Turbo</sub> | 69.1 | 66.0 | 65.5 | 65.7 |
+ | SEED-Image | 77.1 | - | 78.2 | 78.2 |
+ | HallBench<sub>avg</sub> | 55.0 | 49.9 | 56.9 | 55.2 |
+ | MathVista<sub>testmini</sub> | 63.8 | 67.7 | 63.7 | 65.5 |
+ | OpenCompass<sub>avg-score</sub> | 69.9 | 67.9 | 69.7 | 70+ |
 
  - We use both the InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository, while MMMU, OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using VLMEvalKit.