
Commit fc55140

Merge branch 'main' of github.com:maximeraafat/BlenderNeRF

2 parents 65a4956 + 907af41


README.md

Lines changed: 80 additions & 48 deletions
@@ -1,112 +1,144 @@
- # Blender x NeRF
+ # BlenderNeRF

- **Blender x NeRF** is a Blender add-on for creating virtual datasets, leveraged by an AI to synthesize images from unseen viewpoints. The constructed datasets can directly be utilised for training and testing a *Neural Radiance Field ([NeRF](https://www.matthewtancik.com/nerf))* model, yielding AI predicted images in a matter of seconds.
+ Whether a VFX artist, a research fellow or a graphics amateur, **BlenderNeRF** is the easiest and fastest way to create synthetic NeRF datasets within Blender. Obtain renders and camera parameters with a single click, while having full user control over the 3D scene and camera!

- This quick and user friendly tool attempts to narrow the gap between the artistic creation process and state-of-the-art research in computer graphics and vision.
+ ## Neural Radiance Fields
+
+ **Neural Radiance Fields ([NeRF](https://www.matthewtancik.com/nerf))** aim at representing a 3D scene as a view dependent volumetric object from 2D images only, alongside their respective camera information. The 3D scene is reverse engineered from the training images with help of a simple neural network.
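In slightly more concrete terms (a sketch of the standard NeRF formulation, not specific to this add-on), a small network predicts a density $\sigma_i$ and a colour $\mathbf{c}_i$ at sample points along each camera ray, and a pixel colour is obtained by compositing those samples:

```latex
% Volume rendering quadrature along a camera ray sampled at depths t_1 < ... < t_N
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i ,
\qquad
T_i = \exp\Big(-\sum_{j<i} \sigma_j \delta_j\Big) ,
\qquad
\delta_i = t_{i+1} - t_i
```

The network weights are optimised so that these re-rendered pixels match the training images, which is why accurate camera information matters so much.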

+ I recommend watching [this YouTube video](https://www.youtube.com/watch?v=YX5AoaWrowY) by **Corridor Crew** for a thrilling investigation on a few use cases and future potential applications of NeRFs.

<p align='center'>
<img src='https://maximeraafat.github.io/assets/posts/donut_3/Donut3_compressed.gif' width='400'/>
<img src='https://maximeraafat.github.io/assets/posts/donut_3/Donut3_NeRF_compressed.gif' width='400'>
<br>
<b>
- Left : traditional rendering with Eevee renderer
+ Left : traditional rendering with Eevee
<br>
- Right : NeRF-synthesized images
+ Right : NeRF rendering
</b>
</p>

## Motivation

- Rendering is a computationally intensive process ; generating photorealistic scenes can take seconds to hours depending on the scene complexity, hardware properties and the computational resources available to the 3D software.
+ Rendering is an expensive computation. Photorealistic scenes can take seconds to hours to render depending on the scene complexity, hardware and available software resources.

- While rendering might be considered a straight forward process for 3D artists, obtaining the additional camera information necessary for NeRF can be discouraging, even for python familiar users or machine learning developers. This add-on aims at solving this issue, enabling artists to easily integrate AI in their creative flow while also facilitating research.
+ NeRFs can speed up this process, but require camera information typically extracted via cumbersome code. This add-on enables anyone to get renders and cameras with a single click in Blender.

## Installation

1. Download this repository as a ZIP file
- 2. Open Blender (3.0.0 or above. For previous versions, see the [Upcoming](#upcoming) section)
- 3. In Blender, head to *Edit > Preferences > Add-ons*, and click *Install...*
- 4. Select the downloaded ZIP file, and activate the add-on (Object: Blender x NeRF)
+ 2. Open Blender (3.0.0 or above)
+ 3. In Blender, head to **Edit > Preferences > Add-ons**, and click **Install...**
+ 4. Select the downloaded **ZIP** file, and activate the add-on (**Object: BlenderNeRF**)
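For users who prefer scripting, roughly the same steps can be run from Blender's Python console; the file path and module name below are placeholders and may differ from the actual ZIP and module names:

```python
import bpy

# Install the downloaded ZIP and enable the add-on (path and module name are illustrative)
bpy.ops.preferences.addon_install(filepath='/path/to/BlenderNeRF-main.zip')
bpy.ops.preferences.addon_enable(module='BlenderNeRF-main')  # module name may differ
bpy.ops.wm.save_userpref()  # keep the add-on enabled across restarts
```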


## Setting

- **Blender x NeRF** proposes 3 methods, which are discussed in the sub-sections below. From now on when mentioning *training* data, I will refer to the data required by NeRF to *train* (or teach) the AI model. Similarly, the *testing* data will refer to the images predicted by the AI.
- When executed, each of the 3 methods generates an archived ZIP file, containing a training and testing folder. Both folders contain a `transforms_train.json` file, respectively `transforms_test.json` file, with the necessary camera information for NeRF to properly train and test on images.
+ **BlenderNeRF** consists of 3 methods discussed in the sub-sections below. Each method is capable of creating **training** data and **testing** data for NeRF in the form of training images and a `transforms_train.json` respectively `transforms_test.json` file with the corresponding camera information. The data is archived into a single **ZIP** file containing training and testing folders. Training data is used by the NeRF model to learn the 3D scene representation. Once trained, the model can be evaluated (or tested) on the testing data (only camera information) to obtain novel renders.
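For orientation, an Instant NGP style transforms file roughly follows the structure sketched below; the values and the exact set of keys written by the add-on may differ, so treat this purely as an illustration:

```python
import json
import math

# Illustrative Instant NGP style transforms file (keys and values are examples only)
transforms = {
    'camera_angle_x': 2 * math.atan(36 / (2 * 50)),  # horizontal FOV from a 36mm sensor and 50mm lens
    'aabb_scale': 4,                                  # the add-on's AABB property
    'frames': [
        {
            'file_path': 'train/0001.png',            # rendered training image
            'transform_matrix': [[1, 0, 0, 0],        # 4x4 camera-to-world matrix
                                 [0, 1, 0, 0],
                                 [0, 0, 1, 4],
                                 [0, 0, 0, 1]],
        },
        # ... one entry per training frame
    ],
}

with open('transforms_train.json', 'w') as f:
    json.dump(transforms, f, indent=4)
```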

- ### SOF : Subset of Frames
+ ### Subset of Frames

- SOF renders every N frames from a camera animation, and uses those as training data for NeRF. Testing will be executed on all animation frames, that is, both training frames and the remaining ones.
+ **Subset of Frames (SOF)** renders every **N** frames from a camera animation, and utilises the rendered subset of frames as NeRF training data. The registered testing data spans over all frames of the same camera animation, including training frames. Once trained, the NeRF model can render the full camera animation, which makes the method particularly suited for rendering long animations of static scenes.
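Conceptually, the frame selection behaves like the following Blender Python sketch (not the add-on's actual code; the output path is a placeholder):

```python
import bpy

scene = bpy.context.scene
frame_step = 3  # the SOF `Frame Step` property, i.e. N

# Render every Nth frame of the camera animation as a training image
for frame in range(scene.frame_start, scene.frame_end + 1, frame_step):
    scene.frame_set(frame)
    scene.render.filepath = f'/tmp/dataset/train/{frame:04d}.png'  # placeholder path
    bpy.ops.render.render(write_still=True)
```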

- ### TTC : Train and Test Cameras
+ ### Train and Test Cameras

- TTC registers training and testing data from two separate user defined cameras. NeRF will then train on all animation frames rendered with the training camera, and predict all frames seen by the testing camera.
+ **Train and Test Cameras (TTC)** registers training and testing data from two separate user defined cameras. A NeRF model can then be fitted with the data extracted from the training camera, and be evaluated on the testing data.

- ### COS : Camera on Sphere (upcoming)
+ ### Camera on Sphere

- COS renders frames from random views on a sphere while looking at its center, with user defined radius and center location. Those frames will then serve as training data for NeRF. Testing frames are still to be decided (perhaps a predefined spherical trajectory, or a user defined camera path).
+ **Camera on Sphere (COS)** renders training frames by uniformly sampling random camera views directed at the center from a user controlled sphere. Testing data is extracted from a selected camera.
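As an intuition for the sampling itself, a uniformly distributed view on a sphere that looks at the sphere center could be generated roughly like this (an illustrative sketch, not the add-on's implementation):

```python
import bpy
import math
import random
from mathutils import Vector

random.seed(0)  # analogous to the COS `Seed` property

def sample_point_on_sphere(center, radius, upper_only=False):
    # Uniform on the sphere surface: uniform azimuth, uniform z (Archimedes' theorem)
    theta = 2 * math.pi * random.random()
    z = random.uniform(0.0 if upper_only else -1.0, 1.0)
    r = math.sqrt(1.0 - z * z)
    return center + Vector((r * math.cos(theta), r * math.sin(theta), z)) * radius

camera = bpy.context.scene.camera
center, radius = Vector((0.0, 0.0, 0.0)), 4.0  # COS `Location` and `Radius` defaults

camera.location = sample_point_on_sphere(center, radius)
# Point the camera at the sphere center (-Z is the viewing axis of a Blender camera)
direction = center - camera.location
camera.rotation_euler = direction.to_track_quat('-Z', 'Y').to_euler()
```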


- ## How to use the add-on
+ ## How to use the Methods

- The add-on properties panel is available under `3D View > N panel > BlenderNeRF` (the N panel is accessible under the 3D viewport when pressing *N*). All 3 methods (**SOF**, **TTC** and **COS**) share a common tab called `Blender x NeRF shared UI`, which appears per default at the top of the add-on panel. The shared tab contains the below listed controllable properties.
+ The add-on properties panel is available under `3D View > N panel > BlenderNeRF` (the **N panel** is accessible under the 3D viewport when pressing `N`). All 3 methods (**SOF**, **TTC** and **COS**) share a common tab called `BlenderNeRF shared UI` with the below listed controllable properties.

- * `Train` (activated by default) : whether to register training data (camera information + renderings)
+ * `Train` (activated by default) : whether to register training data (renderings + camera information)
* `Test` (activated by default) : whether to register testing data (camera information only)
- * `AABB` (by default set to *4*) : aabb scale parameter as described in Instant NGP (more details below)
+ * `AABB` (by default set to **4**) : aabb scale parameter as described in Instant NGP (more details below)
* `Render Frames` (activated by default) : whether to render the frames
- * `Save Path` (empty by default) : path to the output directory in which the dataset will be stored
+ * `Save Path` (empty by default) : path to the output directory in which the dataset will be created

+ `AABB` is restricted to be an integer power of 2; it defines the side length of the bounding box volume in which NeRF will trace rays. The property was introduced in **NVIDIA's [Instant NGP](https://github.com/NVlabs/instant-ngp)** version of NeRF, currently the only supported version. Future releases of this add-on will introduce support for the original NeRF camera conventions.
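As a small illustration of that constraint, a hypothetical validity check for the property could look like this (not part of the add-on):

```python
def is_valid_aabb(aabb: int) -> bool:
    # A positive integer power of 2: 1, 2, 4, 8, 16, ...
    return aabb >= 1 and (aabb & (aabb - 1)) == 0

print([n for n in range(1, 17) if is_valid_aabb(n)])  # [1, 2, 4, 8, 16]
```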

- `AABB` is restricted to be an integer power of 2, and defines the side length of the bounding box volume in which NeRF will trace rays. The property was introduced in *NVIDIA's [Instant NGP](https://github.com/NVlabs/instant-ngp)* version of NeRF, which is currently the only supported version. Future releases of this add-on might support different versions.
+ Notice that each method has its separate distinctive `Name` property (by default set to `dataset`) corresponding to the dataset name and created **ZIP** filename for the respective method. Please note that unsupported characters, such as spaces, `#` or `/`, will automatically be replaced by an underscore.
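The character replacement can be pictured along these lines (an illustrative sketch; the add-on's exact rule may differ):

```python
import re

def clean_dataset_name(name: str) -> str:
    # Replace anything outside a conservative whitelist with an underscore
    return re.sub(r'[^A-Za-z0-9_.-]', '_', name)

print(clean_dataset_name('my dataset #1'))  # my_dataset__1
```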

- Notice that each method has its separate distinctive `Name` property (by default set to *dataset*), which corresponds to the name of the dataset and ZIP file that will be created for the respective method. Customizing the dataset name for one method will not affect the other method's `Name` properties. Please avoid using unsupported characters (such as spaces, #, or /), as those characters will all be replaced by an underscore.
-
- Below are described the properties specific to each method (the `Name` property is left out, as already discussed above).
+ Below are described the properties specific to each method (the `Name` property is left out, since already discussed above).

6872
### How to SOF
6973

70-
* `Frame Step` (by default set to *3*) : N (as defined in the [Setting](#setting)) = frequency at which we render the frames for training
74+
* `Frame Step` (by default set to **3**) : **N** (as defined in the [Setting](#setting) section) = frequency at which the training frames are registered
7175
* `Camera` (always set to the activate camera) : camera used for registering training and testing data
72-
* `PLAY SOF` : play the *Subset of Frames* method
76+
* `PLAY SOF` : play the **Subset of Frames** method
7377

7478
### How to TTC
7579

7680
* `Train Cam` (empty by default) : camera used for registering the training data
7781
* `Test Cam` (empty by default) : camera used for registering the testing data
78-
* `PLAY TTC` : play the *Train and Test Cameras* method
82+
* `PLAY TTC` : play the **Train and Test Cameras** method
83+
84+
### How to COS
85+
86+
* `Camera` (always set to the activate camera) : camera used for registering the testing data
87+
* `Location` (by default set to **0m** vector) : center position of the training sphere from which camera views are sampled
88+
* `Rotation` (by default set to **** vector) : rotation of the training sphere from which camera views are sampled
89+
* `Scale` (by default set to **1** vector) : scale vector of the training sphere in xyz axes
90+
* `Radius` (by default set to **4m**) : radius scalar of the training sphere
91+
* `Lens` (by default set to **50mm**) : focal length of the training camera
92+
* `Seed` (by default set to **0**) : seed to initialize the random camera view sampling procedure
93+
* `Frames` (by default set to **100**) : number of training frames randomly sampled and rendered from the training sphere
94+
* `Sphere` (deactivated by default) : whether to show the training sphere from which random views will be sampled
95+
* `Camera` (deactivated by default) : whether to show the camera used for registering the training data
96+
* `Upper Views` (deactivated by default) : whether to sample views from the upper hemisphere of the training sphere only
97+
* `PLAY COS` : play the **Camera on Sphere** method
98+
99+
Note that activating the `Sphere` and `Camera` properties create a `BlenderNeRF Sphere` empty object and a `BlenderNeRF Camera` camera object respectively. Please do not create any objects with these names manually, since this might break the add-on functionalities.
79100

80-
### How to COS (upcoming)
101+
Training frames will be captured using the `BlenderNeRF Camera` object. Keep in mind that the training camera is locked in place and cannot manually be moved. The COS method will construct the training data from frame 1 to `Frames`, irrespectively of the scene frame range. Finally, note that the `Upper Views` property is rotation variant.
81102

82103

83104
## Tips for optimal results
84105

85-
As already specified in the previous section, the add-on currently only supports *NVIDIA's [Instant NGP](https://github.com/NVlabs/instant-ngp)* version of NeRF. Feel free to visit their repository for detailed instructions on how to obtain realistic predicted images, or technicalities on their lightning fast NeRF implementation. Below are some quick tips for optimal NeRF training and testing.
106+
As mentioned previously, the add-on currently only supports **NVIDIA's [Instant NGP](https://github.com/NVlabs/instant-ngp)** version of NeRF. Feel free to visit their repository for further help. Below are some quick tips for optimal NeRFing.
86107

87108
* NeRF trains best with 50 to 150 images
88-
* Testing views should not deviate to much from training views (applies especially to TTC)
89-
* Scene movement, motion blur or blurring artefacts, degrade the predicted quality. Avoid them if possible.
90-
* The object should be at least one Blender unit away from the camera : the closer the object is to the camera, the lower you should set `AABB`. Keep it as low as possible, as higher values will increase the training time
91-
* If the predicted quality seems blurry, start with adjusting `AABB`, while keeping it a power of 2
92-
* Do not adjust the camera focal length during the animation, as Instant NGP only supports a single focal length as input to NeRF
93-
94-
Unfortunately, NeRF is not capable of predicting transparent pixels for RGBA images : the method predicts for each pixel a color and a density. Transparency (e.g., under the form of a transparent background) results in invalid density values, causing the transparent background in your training images to be replaced by a monochrome color.
109+
* Testing views should not deviate too much from training views
110+
* Scene movement, motion blur or blurring artefacts can degrade the reconstruction quality
111+
* The captured scene should be at least one Blender unit away from the camera
112+
* The closer the camera to the captured scene, the lower you can set `AABB`
113+
* Higher `AABB` will increase training time, keep it as low as possible
114+
* If the reconstruction quality appears blurry, start by adjusting `AABB` while keeping it a power of 2
115+
* Avoid adjusting the camera focal lengths during the animation, the vanilla NeRF methods do not support multiple focal lengths
95116

96117

97118
## How to run NeRF
98119

99-
If you possess an NVIDIA GPU, you might want to install [Instant NGP](https://github.com/NVlabs/instant-ngp) on your own device for an optimal user experience with a GUI by following the instructions provided in their GitHub repository. Otherwise, you can run NeRF in a COLAB notebook on Google GPUs for free (all you need is a Google account).
120+
If you have access to an NVIDIA GPU, you might want to install [Instant NGP](https://github.com/NVlabs/instant-ngp) on your own device for an optimal user experience with a GUI by following the instructions provided in their GitHub repository. Otherwise, you can run NeRF in a COLAB notebook on Google GPUs for free (all you need is a Google account).
100121

101122
Open this [COLAB notebook](https://colab.research.google.com/drive/1CtF_0FgwzCZMYQzGXbye2iVS1ZLlq9Tw?usp=sharing) (also downloadable [here](https://gist.github.com/maximeraafat/122a63c81affd6d574c67d187b82b0b0)) and follow the instructions.
102123

103124

125+
## Remarks
126+
127+
This add-on is being developed as a fun side project over the course of multiple months and versions of Blender, mainly on macOS. If you encouter any issues with the plug-in functionalities, feel free to open a GitHub issue with a clear description of the problem, which **BlenderNeRF** version the issues have been experienced with, and any further information if relevant.
128+
129+
130+
If you find this repository useful in your research, please consider citing **BlenderNeRF** using the dedicated button above. \
131+
If you made use of **BlenderNeRF** in your artistic projects, feel free to share some of your work using the `#blendernerf` hashtag on social media! :)
132+
133+
104134
## Upcoming
105-
* Support for previous blender versions
106-
* If frames have already been rendered, enable the possibility to copy the already rendered frames to the dataset instead of rendering them again
107-
* COS method (add-on release version 3.0)
108-
* Support for other NeRF implementations, for example [Torch NGP](https://github.com/ashawkey/torch-ngp)?
109-
* Once all methods are released : publish simple explanatory tutorial video
110-
<!--
111-
* Enable user defined NeRF resolution in Notebook and COLAB : if set to 0, use Blender scene resolution
112-
-->
135+
136+
Below are presented ideas for upcoming features sorted by priority in descending order.
137+
138+
- [ ] Evaluations and demonstrations for each method on various scenes
139+
- [ ] Support for the original NeRF `transforms.json` convention
140+
- [ ] Support for previous blender versions
141+
- [ ] Enable option to create log file for reproducibility containing add-on version, used method and all respective parameters...
142+
- [ ] Enable user defined NeRF output resolution in COLAB notebook : if width or length = 0, use default scene resolution
143+
- [ ] Enable option to copy already rendered frames from a set of images instead of rendering them again
144+
- [ ] Add-on updater button (see [this video](https://www.youtube.com/watch?v=usjPdfMHE9c&t))?
