mattdeitke committed
Commit
e36ea65
1 Parent(s): b9c419b

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED

```diff
@@ -40,7 +40,7 @@ A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
 
 With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
 
-<video autoplay muted>
+<video autoplay muted loop>
   <source src="https://github.com/allenai/objaverse-rendering/assets/28768645/17981b67-5f43-4619-b4b6-aeb79fb9c1e2" type="video/mp4">
 </video>
 
@@ -48,7 +48,7 @@ With the base Zero123-XL foundation model, we can perform image → 3D using [Dr
 
 Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
 
-<video autoplay muted>
+<video autoplay muted loop>
   <source src="https://github.com/allenai/objaverse-rendering/assets/28768645/10d621b5-b4ee-45dd-88c9-19e529fcecd4" type="video/mp4">
 </video>
```