---
inference: false
---

<br>
<br>

# LWM-128K-Jax Model Card

## Model details

**Model type:**
LWM-128K-Jax is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data, along with a large collection if image and video data. It is an auto-regressive vision-language model, based on the transformer architecture. These are the Jax / Flax version of the parameters.

The model is distributed as a Jax checkpoint. Inference code and instructions can be found at: https://github.com/LargeWorldModel/lwm

**Model date:**
LWM-128K-Jax was trained in January 2024.

**Paper or resources for more information:**
https://largeworldmodel.github.io/

## License
Llama 2 is licensed under the LLAMA 2 Community License, 
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues

## Training dataset
- Books3 dataset
- 700M text-image pairs from Laion-2B-en, filtered to keep only images with a resolution of at least 256
- 400M text-image pairs from COYO-700M, filtered to keep only images with a resolution of at least 256
- 10M text-video pairs from WebVid10M
- 3M text-video pairs from a subset of InternVid10M
- 73K text-video chat pairs from Valley-Instruct-73K 
- 100K text-video chat pairs from Video-ChatGPT