---
license: odc-by
language:
- en
viewer: false
---

# Objaverse-XL

<a href="https://arxiv.org/abs/2307.05663" target="_blank">
    <img src="https://img.shields.io/badge/arXiv-2307.05663-b31b1b">
</a>

_Uploading is currently in progress!_

Objaverse-XL is an open dataset of over 10 million 3D objects!

With it, we train Zero123-XL, a foundation model for 3D, and observe remarkable 3D generalization abilities:

<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
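
For reference, here is a minimal sketch of how loading might look once uploading completes, assuming the `objaverse` pip package exposes an `objaverse.xl` module with `get_annotations`/`download_objects` helpers (function names and annotation schema are illustrative, not confirmed by this card):

```python
# A minimal sketch, assuming the `objaverse` pip package exposes an
# `objaverse.xl` module once uploading completes. Function names and the
# annotation schema are illustrative, not confirmed by this card.
import objaverse.xl as oxl

# Fetch the metadata table (hypothetically one row per object:
# source, file identifier, license, ...).
annotations = oxl.get_annotations(download_dir="~/.objaverse")
print(len(annotations))

# Download a small random sample of objects to local disk.
sample = annotations.sample(10)
oxl.download_objects(objects=sample, download_dir="~/.objaverse")
```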


## Scale Comparison

Objaverse 1.0 was released back in December 2022. It was a step in the right direction, but still relatively small at 800K objects.

Objaverse-XL is over an order of magnitude larger and much more diverse!

<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">

## Unlocking Generalization

Compared to the original Zero123 model, Zero123-XL shows remarkably improved zero-shot generalization, even performing novel view synthesis on sketches, cartoons, and people!

There are many more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)

<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">

## Image → 3D

With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), with the model guiding a NeRF to generate novel views!

<video autoplay muted loop controls>
  <source src="https://github.com/allenai/objaverse-rendering/assets/28768645/17981b67-5f43-4619-b4b6-aeb79fb9c1e2" type="video/mp4">
</video>
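
For intuition, here is a toy sketch of the DreamFusion-style score distillation sampling (SDS) loop this relies on; the renderer and denoiser below are hypothetical stand-ins so the loop runs end to end, not the actual Zero123-XL pipeline:

```python
# A toy sketch of DreamFusion-style score distillation sampling (SDS).
# The "renderer" and "denoiser" are hypothetical stand-ins, not the real
# NeRF or Zero123-XL model.
import torch

H = W = 64
nerf_params = torch.randn(3, H, W, requires_grad=True)  # stand-in for a NeRF
optimizer = torch.optim.Adam([nerf_params], lr=1e-2)

def render(params):
    # Stand-in for differentiable volume rendering from a sampled camera.
    return torch.sigmoid(params).unsqueeze(0)           # (1, 3, H, W)

def denoiser(noisy, t):
    # Stand-in for Zero123-XL's noise prediction; the real model is also
    # conditioned on the input image and relative camera pose.
    return noisy - noisy.mean()

for step in range(100):
    rendered = render(nerf_params)
    t = torch.randint(20, 980, (1,))
    noise = torch.randn_like(rendered)
    noisy = rendered + noise * (t / 1000.0)             # toy forward diffusion
    with torch.no_grad():
        eps_pred = denoiser(noisy, t)
    # SDS: push rendered pixels along (eps_pred - noise), skipping the
    # diffusion model's Jacobian.
    loss = ((eps_pred - noise).detach() * rendered).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```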

## Text → 3D

Text → 3D comes for free with text → image models, such as SDXL here, which provide the initial image!

<video autoplay muted loop controls>
  <source src="https://github.com/allenai/objaverse-rendering/assets/28768645/10d621b5-b4ee-45dd-88c9-19e529fcecd4" type="video/mp4">
</video>
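
A rough sketch of that chain (the SDXL call uses the real `diffusers` API; `image_to_3d` is a hypothetical wrapper around the Zero123-XL-guided NeRF optimization sketched above):

```python
# A minimal sketch of the text -> image -> 3D chain. The SDXL call uses the
# real `diffusers` API; `image_to_3d` is a hypothetical helper, not a real
# function shipped with this dataset.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Step 1: text -> image with SDXL.
image = pipe("a corgi wearing a wizard hat, 3D render").images[0]

# Step 2: image -> 3D (hypothetical helper; see the SDS sketch above).
mesh = image_to_3d(image)
```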

## Scaling Trends

Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!

<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">

## License

The dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL come with their own licenses, which vary by object.
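
Since licensing varies per object, downstream users will likely want to filter on it. A minimal sketch, assuming the annotations table from the loading example above carries a per-object `license` column (the column name and its values are assumptions about the schema):

```python
# A minimal sketch of per-object license filtering. The `license` column
# name and its values are assumptions about the annotation schema.
import objaverse.xl as oxl

annotations = oxl.get_annotations(download_dir="~/.objaverse")
permissive = annotations[annotations["license"].isin(["CC0", "CC-BY"])]
print(f"{len(permissive)} objects under permissive licenses")
```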

## Citation

To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:

```bibtex
@article{objaverseXL,
  title={Objaverse-XL: A Universe of 10M+ 3D Objects},
  author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
          Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
          Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
          Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
          Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
  journal={arXiv preprint arXiv:2307.05663},
  year={2023}
}
```

Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:

```bibtex
@article{objaverse,
  title={Objaverse: A Universe of Annotated 3D Objects},
  author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
          Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
          Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
  journal={arXiv preprint arXiv:2212.08051},
  year={2022}
}
```