Update README.md to improve the tags and other minor things (#1)
- Update README.md (256bb60a33d3232480699d16d259bfeb71ca6abb)
Co-authored-by: Sayak Paul <[email protected]>
README.md CHANGED
@@ -1,6 +1,11 @@
 ---
+license: creativeml-openrail-m
 datasets:
 - laion/laion400m
+tags:
+- stable-diffusion
+- stable-diffusion-diffusers
+- text-to-image
 language:
 - en
 ---
@@ -22,7 +27,7 @@ This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that genera
 ## Intended uses
 
 You can use this model to generate RGB and depth map given a text prompt.
-A short video summarizing the approach can be found at [this url](https://t.ly/tdi2) and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0)
+A short video summarizing the approach can be found at [this url](https://t.ly/tdi2) and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0).
 
 
 ### How to use
@@ -47,7 +52,7 @@ depth_image[0].save(name+"_ldm3d_depth.png")
 ### Limitations and bias
 
 For the image generation, limitations and bias are the same as the ones from [Stable diffusion](https://huggingface.co/CompVis/stable-diffusion-v1-4#limitations)
-For the depth map generation, limitations and bias are the same as the ones from [DPT](https://huggingface.co/Intel/dpt-large)
+For the depth map generation, limitations and bias are the same as the ones from [DPT](https://huggingface.co/Intel/dpt-large).
 
 
 ## Training data
@@ -67,11 +72,11 @@ The figure below shows some qualitative results comparing our method with (Stabl
 ### BibTeX entry and citation info
 ```bibtex
 @misc{stan2023ldm3d,
-title={LDM3D: Latent Diffusion Model for 3D},
-author={Gabriela Ben Melech Stan and Diana Wofk and Scottie Fox and Alex Redden and Will Saxton and Jean Yu and Estelle Aflalo and Shao-Yen Tseng and Fabio Nonato and Matthias Muller and Vasudev Lal},
-year={2023},
-eprint={2305.10853},
-archivePrefix={arXiv},
-primaryClass={cs.CV}
+      title={LDM3D: Latent Diffusion Model for 3D},
+      author={Gabriela Ben Melech Stan and Diana Wofk and Scottie Fox and Alex Redden and Will Saxton and Jean Yu and Estelle Aflalo and Shao-Yen Tseng and Fabio Nonato and Matthias Muller and Vasudev Lal},
+      year={2023},
+      eprint={2305.10853},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV}
 }
 ```
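The "How to use" block itself is unchanged by this commit and shows up in the diff only as context (the `depth_image[0].save(name+"_ldm3d_depth.png")` line in the third hunk). For reference, a minimal sketch of what that usage looks like, assuming diffusers' `StableDiffusionLDM3DPipeline`; the prompt and output file names are illustrative, not taken from the diff:

```python
# Minimal sketch of generating an RGB image plus depth map from a text
# prompt with the ldm3d checkpoint. Assumes diffusers >= 0.17, which
# provides StableDiffusionLDM3DPipeline; prompt/file names are examples.
import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "A picture of some lemons on a table"  # illustrative prompt
name = "lemons"

output = pipe(prompt)
# The pipeline output carries both modalities: a list of RGB images and
# a list of depth maps, matching the save call shown in the diff context.
rgb_image, depth_image = output.rgb, output.depth
rgb_image[0].save(name + "_ldm3d_rgb.jpg")
depth_image[0].save(name + "_ldm3d_depth.png")
```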