Run the `setup_project.sh` script to install the different models integrated in this project along with their dependencies. Testing has been done on Ubuntu 20.04 LTS.
For the Blender plugin:
- In Blender, go to the Scripting tab
- Open `install_blender_addon.py`
- Run the script
This will install and enable the plugin, which can be found by pressing N while in the Layout workspace.
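For reference, the Blender Python API exposes addon installation directly. Below is a minimal sketch of what such an install script typically does; the module name `photogen_addon` and the file path are placeholders, not the project's actual names:

```python
# Sketch of a typical addon-install script using the Blender Python API.
# "photogen_addon" is a placeholder module name, not the project's actual one.
import bpy

bpy.ops.preferences.addon_install(filepath="/path/to/photogen_addon.py")
bpy.ops.preferences.addon_enable(module="photogen_addon")
bpy.ops.wm.save_userpref()  # keep the addon enabled across Blender sessions
```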
The plugin generates synthetic photogrammetry datasets from 3D scenes. It facilitates the creation of controlled camera setups, the rendering of images with realistic camera effects, and the export of data for photogrammetry reconstruction and error analysis.
The plugin adds a PhotoGen tab to the 3D View sidebar with the following panels:
Camera Parameters:
- `focal-length`: Sets the focal length for all generated cameras
- `sensor-size`: Sets the sensor width for all generated cameras
- `camera-size`: Controls the display size of camera objects in the viewport
Camera Navigation:
- Delete all cameras
- View previous camera
- View next camera
Center Point:
- Select center of cameras generation
Spherical Distribution Parameters:
- `camera-distance`: Distance from the center point to the cameras
- `Y-axis-min-angle`: Minimum horizontal angle (0-360°)
- `Y-axis-max-angle`: Maximum horizontal angle (0-360°)
- `X-axis-min-angle`: Minimum vertical angle (0-180°)
- `X-axis-max-angle`: Maximum vertical angle (0-180°)
Uniform Distribution:
- `Y-axis-steps`: Number of horizontal steps
- `X-axis-steps`: Number of vertical steps
- `added-noise`: Random variation (0-1)
- Generate cameras uniformly (see the sketch below)
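To illustrate what these parameters control, here is a minimal sketch of a uniform spherical camera distribution; the plugin's exact angle conventions and noise semantics are assumptions:

```python
import math
import random

def uniform_camera_positions(center, distance,
                             y_range=(0.0, 360.0), x_range=(0.0, 180.0),
                             y_steps=8, x_steps=4, added_noise=0.0):
    """Place cameras on a sphere around `center` on an even angular grid.

    Y = horizontal (azimuth) angle and X = vertical (polar) angle, matching
    the panel labels; the plugin's exact conventions are assumptions.
    `added_noise` (0-1) jitters each angle by that fraction of a grid step.
    """
    cx, cy, cz = center
    positions = []
    for i in range(y_steps):
        for j in range(x_steps):
            az = y_range[0] + (y_range[1] - y_range[0]) * i / max(y_steps - 1, 1)
            pol = x_range[0] + (x_range[1] - x_range[0]) * j / max(x_steps - 1, 1)
            az += random.uniform(-1, 1) * added_noise * (y_range[1] - y_range[0]) / max(y_steps, 1)
            pol += random.uniform(-1, 1) * added_noise * (x_range[1] - x_range[0]) / max(x_steps, 1)
            a, p = math.radians(az), math.radians(pol)
            positions.append((cx + distance * math.sin(p) * math.cos(a),
                              cy + distance * math.sin(p) * math.sin(a),
                              cz + distance * math.cos(p)))
    return positions
```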
Random Distribution:
- `number-of-cameras`: Total number of cameras to generate
- Generate cameras randomly
Rendering Parameters:
- `lens-distortion`: Lens distortion amount (-1 to 1); see the sketch after this panel's controls
- `chromatic-dispersion`: Chromatic aberration amount (0-1)
- `render-dir`: Output directory for rendered data
- Render scene
- Select working Bounding Box
- Update working Bounding Box
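The plugin's exact lens model is not documented here; as an illustration, a common single-coefficient radial distortion model, with chromatic aberration approximated by per-channel scale offsets (the 0.01 factor is arbitrary), looks like this:

```python
def distort(x, y, k):
    """Single-coefficient radial distortion on normalized image coordinates
    (origin at the image center). k > 0 pincushions, k < 0 barrels;
    the plugin's actual lens model is an assumption.
    """
    r2 = x * x + y * y
    scale = 1.0 + k * r2
    return x * scale, y * scale

def chromatic_offsets(x, y, dispersion):
    """Approximate chromatic aberration: red and blue channels are sampled
    with slightly different radial scales (factors chosen for illustration)."""
    return {
        "r": distort(x, y, +dispersion * 0.01),
        "g": (x, y),
        "b": distort(x, y, -dispersion * 0.01),
    }
```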
- Set up your 3D scene in Blender
- Delete any existing cameras by pressing Delete all cameras
- Create a center point for camera placement
- Configure camera parameters
- Generate cameras using uniform or random distribution
- Preview camera views
- Define the working bounding box
- Set rendering parameters and output directory
- Render the scene
The repository includes some preconfigured scenes for testing.
The plugin generates:
- RGB images (PNG format)
- Depth maps (OpenEXR format)
- Normal maps (OpenEXR format)
- Scene geometry (PLY format)
- Camera parameters and metadata (JSON format)
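A minimal sketch of loading these outputs in Python follows; the file names are illustrative placeholders (the plugin's naming scheme is not documented here), and OpenCV needs OpenEXR support for the depth and normal maps:

```python
import json
import os

# Recent OpenCV builds require this opt-in before importing cv2 to read .exr files
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
import cv2

render_dir = "render"  # placeholder path
rgb = cv2.imread(os.path.join(render_dir, "0001.png"))
depth = cv2.imread(os.path.join(render_dir, "0001_depth.exr"),
                   cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
with open(os.path.join(render_dir, "cameras.json")) as f:
    cameras = json.load(f)  # camera parameters and metadata
```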
The bounding box defined when rendering is used for computing metrics and for defining the region of interest for the colored mesh representing errors. It defaults to a unit cube at the center, so if the area of interest in the scene lies outside it, the metrics will not account for it. It is therefore important to set the working bounding box so it encloses the desired volume. The bounding box can be updated by pressing the Update working Bounding Box element in the Rendering Parameters section.
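Conceptually, only points that fall inside the working bounding box contribute to the metrics; a minimal numpy sketch of such a mask:

```python
import numpy as np

def inside_bbox(points, bbox_min, bbox_max):
    """Boolean mask of which points fall inside the working bounding box.
    Points outside contribute nothing to the metrics, which is why the box
    must enclose the region of interest."""
    points = np.asarray(points)
    return np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
```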
This framework provides tools for executing different 3D reconstruction methods from images and evaluating their accuracy against ground truth. It supports multiple reconstruction algorithms and includes utilities for adding noise, computing metrics, and visualizing errors.
The framework consists of two main components:
- `exec_model.py`: Main script for executing reconstruction methods and computing initial metrics
- `metrics_mesh.py`: Additional script for computing precision/recall/F-score metrics and generating error-colored visualizations
- `alice`: AliceVision
- `colmap`: COLMAP
- `nerfacto`: SDFStudio with the Nerfacto method
- `bakedsdf`: SDFStudio with the BakedSDF method
- `fds`: Fast Dipole Sums
First, run these commands from the parent folder of the project: the first sets up AliceVision, and the second activates the Python environment:
```bash
source env_setup.sh
source env/bin/activate
```

Then run the main script:

```bash
python exec_model.py --method METHOD --input_dir INPUT_DIR --out_dir OUT_DIR \
    --error_threshold ERROR_THRESHOLD --reconstruct RECONSTRUCT \
    [optional parameters]
```

Required parameters:

- `--method`: Method to use for reconstruction (`alice`, `colmap`, `fds`, `nerfacto`, `bakedsdf`)
- `--input_dir`: Directory containing the input scene (where the "render" folder is located)
- `--out_dir`: Path to the output directory where results will be stored
- `--error_threshold`: Error threshold for evaluation
  - Use percentage format (e.g., `0.05%`) for relative error based on the bounding box diagonal
  - Use decimal format (e.g., `0.01`) for absolute error in world units
- `--reconstruct`: Whether to run the reconstruction process
  - `True`: Run the full reconstruction pipeline
  - `False`: Only compute metrics on an existing reconstruction. This option is useful for testing ICP or for regenerating the CSV files without executing the full pipeline

Optional parameters:

- `--mono_prior`: Whether to use mono-priors (normal & depth) for neural methods (default: `False`)
- `--inside`: Whether the reconstruction is inside a room (`True`) or of a single object outside (`False`) (default: `True`). This affects `bakedsdf` and `fds`.
- `--use_icp`: Whether to use ICP for alignment (default: `False`)
- `--use_global_icp`: Whether to use Go-ICP for initial alignment (default: `False`). Useful when the camera reconstruction fails.
- `--export_csv`: Whether to export results as CSV files (default: `True`)
- `--image_noise`: Image noise to apply to training images (default: `none`); see the sketch after this list
  - `none`: No noise
  - `gaussian#mean#variance`: Gaussian noise with the specified mean and variance
  - `salt_pepper#probability`: Salt and pepper noise with the specified probability
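As a reference for the noise specification format, here is a hedged numpy sketch of the usual definitions of both noise models; the framework's exact implementation may differ:

```python
import numpy as np

def apply_image_noise(img, spec):
    """Apply a noise spec like 'gaussian#0#0.01' or 'salt_pepper#0.05'
    to an image with float values in [0, 1]. A sketch of the standard
    definitions, not the framework's exact code."""
    if spec == "none":
        return img
    parts = spec.split("#")
    if parts[0] == "gaussian":
        mean, variance = float(parts[1]), float(parts[2])
        noisy = img + np.random.normal(mean, np.sqrt(variance), img.shape)
        return np.clip(noisy, 0.0, 1.0)
    if parts[0] == "salt_pepper":
        prob = float(parts[1])
        noisy = img.copy()
        mask = np.random.random(img.shape[:2])
        noisy[mask < prob / 2] = 0.0                      # pepper
        noisy[(mask >= prob / 2) & (mask < prob)] = 1.0   # salt
        return noisy
    raise ValueError(f"unknown noise spec: {spec}")
```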
After running a reconstruction, you can generate additional metrics and error-visualized meshes:
```bash
python metrics_mesh.py --method METHOD --input_dir INPUT_DIR --out_dir OUT_DIR \
    --error_threshold ERROR_THRESHOLD [--colormap COLORMAP]
```

- `--method`: Same method used for the reconstruction
- `--input_dir`: Same input directory used for the reconstruction
- `--out_dir`: Same output directory used for the reconstruction
- `--error_threshold`: Error threshold for evaluation (same format as in `exec_model.py`)
- `--colormap`: Color map to use for error visualization (default: `jet`)
The framework computes multiple metrics to evaluate reconstruction quality. Depending on `--error_threshold`, the metrics are normalized or not: if the threshold is relative, the distance metrics are normalized by the diagonal of the working bounding box; if the threshold is absolute, the metrics are computed in world units:
- Camera position error: Mean Euclidean distance between ground truth and reconstructed camera positions after alignment
- Camera direction error: Mean angular difference (in degrees) between ground truth and reconstructed camera orientations
- SFM point cloud error: Mean distance between sparse Structure from Motion points and the ground truth mesh
- Final point cloud error: Mean distance between the dense reconstructed point cloud and the ground truth mesh
- Final mesh error: Mean distance between points sampled from the reconstructed mesh and their closest points on the ground truth mesh
- Squared distance error: Mean squared Euclidean distance between reconstructed mesh points and ground truth mesh
- Hausdorff distance: Maximum distance between any point in the reconstructed mesh and its closest point on the ground truth mesh (worst-case error)
- Normal direction error: Mean angular difference (in degrees) between surface normals at corresponding points on the reconstructed and ground truth meshes
- Final point cloud depth error: Mean error between reconstructed point clouds and the ground truth depth maps, averaging errors across all cameras that observe each point.
- Final point cloud depth min error: Mean of the minimum errors for each reconstructed point. For each point, we find the minimum error across all cameras that observe it, then average these minimum values.
- Depth error: Mean error between point clouds generated from the reconstructed depth maps and the ground truth mesh
- Precision: Percentage of points in the reconstructed mesh that are within the error threshold distance of the ground truth mesh
- Recall: Percentage of points in the ground truth mesh that are within the error threshold distance of the reconstructed mesh
- F-score: Harmonic mean of precision and recall, (2 × precision × recall)/(precision + recall); see the sketch after this list
- Points axis errors: Mean error along each camera-relative axis (x, y, z), useful for analyzing directional biases in reconstruction
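To make the threshold handling and the precision/recall/F-score definitions concrete, here is a minimal sketch using scipy; it follows the standard definitions above rather than the framework's exact code:

```python
import numpy as np
from scipy.spatial import cKDTree

def resolve_threshold(spec, bbox_diagonal):
    """'0.05%' -> 0.0005 * bbox_diagonal (relative); '0.01' -> 0.01 (world units)."""
    if spec.endswith("%"):
        return float(spec[:-1]) / 100.0 * bbox_diagonal
    return float(spec)

def precision_recall_fscore(recon_pts, gt_pts, threshold):
    """Precision/recall/F-score from nearest-neighbor distances between
    point samples of the reconstructed and ground-truth meshes."""
    d_recon_to_gt = cKDTree(gt_pts).query(recon_pts)[0]   # recon -> GT
    d_gt_to_recon = cKDTree(recon_pts).query(gt_pts)[0]   # GT -> recon
    precision = float(np.mean(d_recon_to_gt <= threshold))
    recall = float(np.mean(d_gt_to_recon <= threshold))
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```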
The framework generates the following outputs:
- Reconstructed 3D model in the specified output directory
- JSON file with summary statistics (`stats.json`)
- CSV files with detailed point-wise error metrics in the `metrics` folder:
  - `inside_bbox_metrics.csv`: Metrics for points inside the bounding box
  - `inv_inside_bbox_metrics.csv`: Reverse metrics (from ground truth to reconstruction)
  - `final_point_cloud_metrics.csv`: Metrics for the entire point cloud
- Reconstructed 3D model centered with respect to the ground truth mesh in the `metrics` folder
- Error-colored mesh visualization showing reconstruction quality with color-coding in the `metrics` folder
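A quick way to inspect these outputs (the exact location of the `metrics` folder inside the output directory is an assumption):

```python
import json

import pandas as pd

with open("results/stats.json") as f:
    print(json.dumps(json.load(f), indent=2))  # summary statistics

per_point = pd.read_csv("results/metrics/inside_bbox_metrics.csv")
print(per_point.describe())  # distribution of the point-wise error metrics
```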
```bash
python exec_model.py --method colmap --input_dir ./scenes/room1 --out_dir ./results \
    --error_threshold 0.05% --reconstruct True
```

```bash
python exec_model.py --method nerfacto --input_dir ./scenes/statue --out_dir ./results \
    --error_threshold 0.01 --reconstruct True --inside False \
    --image_noise gaussian#0#0.01
```

After running a reconstruction, you can visualize the error distribution on the reconstructed mesh:

```bash
python metrics_mesh.py --method colmap --input_dir ./scenes/room1 --out_dir ./results \
    --error_threshold 0.05% --colormap viridis
```

This will generate a colored mesh where each point is colored according to its distance from the ground truth, providing an intuitive visualization of where reconstruction errors occur.
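The color mapping itself amounts to normalizing each point's distance by the error threshold and looking it up in the chosen colormap; a sketch of the usual approach, not necessarily `metrics_mesh.py`'s exact mapping:

```python
import numpy as np
from matplotlib import colormaps  # matplotlib >= 3.5

def color_vertices_by_error(errors, threshold, colormap="jet"):
    """Map per-vertex distances to RGB colors, clamping at the error
    threshold so the hottest color marks points at or above it."""
    t = np.clip(np.asarray(errors, dtype=float) / threshold, 0.0, 1.0)
    return colormaps[colormap](t)[:, :3]  # drop the alpha channel
```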
The project is licensed under the MIT License. See the license file for further details.