🪐 Objaverse-XL Rendering Script

(Header image: Blender generated with MidJourney.)

Scripts for rendering Objaverse-XL with Blender. Rendering is the process of turning the 3D objects into 2D images, which can then be used for training AI models.

🖥️ Setup

  1. Clone the repository and enter the rendering directory:
git clone https://github.com/allenai/objaverse-xl.git && \
  cd objaverse-xl/scripts/rendering
  2. Download Blender:
wget https://download.blender.org/release/Blender3.2/blender-3.2.2-linux-x64.tar.xz && \
  tar -xf blender-3.2.2-linux-x64.tar.xz && \
  rm blender-3.2.2-linux-x64.tar.xz
  3. If you're on a headless Linux server, install Xorg and start it:
sudo apt-get install xserver-xorg -y && \
  sudo python3 start_x_server.py start
  4. Install the Python dependencies. Note that Python >3.8 is required:
cd ../.. && \
  pip install -r requirements.txt && \
  pip install -e . && \
  cd scripts/rendering
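
As an optional sanity check before rendering, the snippet below, which assumes Blender was extracted into the current scripts/rendering directory as in step 2, launches Blender headlessly and prints its version:

import subprocess

# Run the downloaded Blender binary headlessly; it should print
# "Blender 3.2.2" and exit. Adjust the path if you extracted it elsewhere.
result = subprocess.run(
    ["./blender-3.2.2-linux-x64/blender", "--background", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout)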

📸 Example Usage

After setup, we can start to render objects using the main.py script:

python3 main.py

After running this, you should see 10 zip files located in ~/.objaverse/github/renders, which correspond to renders of objects from our example 3D objects repo:

> ls ~/.objaverse/github/renders
0fde27a0-99f0-5029-8e20-be9b8ecabb59.zip  54f7478b-4983-5541-8cf7-1ab2e39a842e.zip  93499b75-3ee0-5069-8f4b-1bab60d2e6d6.zip
21dd4d7b-b203-5d00-b325-0c041f43524e.zip  5babbc61-d4e1-5b5c-9b47-44994bbf958e.zip  ab30e24f-1046-5257-8806-2e346f4efebe.zip
415ca2d5-9d87-568c-a5ff-73048a084229.zip  5f6d2547-3661-54d5-9895-bebc342c753d.zip
44414a2a-e8f0-5a5f-bb58-6be50d8fd034.zip  8a170083-0529-547f-90ec-ebc32eafe594.zip

If we unzip one of the zip files:

> cd ~/.objaverse/github/renders
> unzip 0fde27a0-99f0-5029-8e20-be9b8ecabb59.zip

we will see a new 0fde27a0-99f0-5029-8e20-be9b8ecabb59 directory. Looking inside it, we'll find the following files:

> ls 0fde27a0-99f0-5029-8e20-be9b8ecabb59
000.npy  001.npy  002.npy  003.npy  004.npy  005.npy  006.npy  007.npy  008.npy  009.npy  010.npy  011.npy  metadata.json
000.png  001.png  002.png  003.png  004.png  005.png  006.png  007.png  008.png  009.png  010.png  011.png
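
If you prefer to process the output programmatically rather than by hand, here is a minimal sketch, assuming the default ~/.objaverse/github/renders output directory, that extracts every archive and pairs each render with its camera-pose file:

import zipfile
from pathlib import Path

renders_dir = Path.home() / ".objaverse" / "github" / "renders"

# Extract each archive; every zip unpacks into a directory named
# after the object's UID, as shown above.
for archive in renders_dir.glob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(renders_dir)

# Pair each rendered image with its camera-pose file.
for render_dir in (p for p in renders_dir.iterdir() if p.is_dir()):
    for png in sorted(render_dir.glob("*.png")):
        print(png.name, "<->", png.with_suffix(".npy").name)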

Here, we see that there are 12 renders, [000-011].png. Each render will look something like one of the four images shown below, but likely with the camera in a different position, since the camera's location is randomized during rendering:

(Figure: four example renders of the object from randomized camera positions.)

Additionally, there are 12 npy files, [000-011].npy, which contain information about the camera's pose for a given render. We can read the npy files using:

import numpy as np

# Load the camera pose saved alongside render 000.png
array = np.load("000.npy")

where array is now a 3x4 camera matrix that looks something like:

array([[6.07966840e-01,  7.93962419e-01,  3.18103019e-08,  2.10451518e-07],
       [4.75670159e-01, -3.64238620e-01,  8.00667346e-01, -5.96046448e-08],
       [6.35699809e-01, -4.86779213e-01, -5.99109232e-01, -1.66008198e+00]])
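
A common way to work with this matrix is to split it into a 3x3 rotation block and a translation vector. The sketch below shows that split; the final step, recovering the camera's world position as -Rᵀt, assumes the matrix is a world-to-camera extrinsic [R|t], an assumption you should verify against the conventions recorded in metadata.json.

import numpy as np

array = np.load("000.npy")  # shape (3, 4)

R = array[:, :3]  # 3x3 rotation block
t = array[:, 3]   # translation vector

# Assumption: the matrix is a world-to-camera extrinsic [R|t]
# (check metadata.json for the actual convention). Under that
# assumption, the camera's position in world coordinates is:
camera_position = -R.T @ t
print(camera_position)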

Note: USDZ support is experimental. Since Blender does not natively support USDZ, we use this Blender addon, but it doesn't work with all types of USDZ files. If you have a better solution, PRs are very much welcome 😄!