PhotogrammetryErrors

A tool for testing and comparing photogrammetry algorithms in simulated environments.

Installation

Run the setup_project.sh script to install the different models integrated in this project and their dependencies. Testing has been done on Ubuntu 20.04 LTS.

For the Blender plugin:

  1. In Blender, go to the Scripting tab
  2. Open install_blender_addon.py
  3. Run the script

This will install and enable the plugin, which can be found by pressing N while in the Layout window.

General Overview

Blender Plugin

This plugin generates synthetic photogrammetry datasets from 3D scenes. It facilitates the creation of controlled camera setups, the rendering of images with realistic camera effects, and the export of data for photogrammetry reconstruction and error analysis.

User Interface

The plugin adds a PhotoGen tab in the 3D View sidebar with two main panels:

Cameras Generator Panel

Camera Parameters:

  • focal-length: Sets the focal length for all generated cameras
  • sensor-size: Sets the sensor width for all generated cameras
  • camera-size: Controls the display size of camera objects in the viewport

Camera Navigation:

  • Delete all cameras
  • View previous camera
  • View next camera

Spherical Shots Subpanel

Center Point:

  • Select the center point for camera generation

Spherical Distribution Parameters:

  • camera-distance: Distance from center point to cameras
  • Y-axis-min-angle: Minimum horizontal angle (0-360°)
  • Y-axis-max-angle: Maximum horizontal angle (0-360°)
  • X-axis-min-angle: Minimum vertical angle (0-180°)
  • X-axis-max-angle: Maximum vertical angle (0-180°)

Uniform Distribution:

  • Y-axis-steps: Number of horizontal steps
  • X-axis-steps: Number of vertical steps
  • added-noise: Random variation (0-1)
  • Generate cameras uniformly

Random Distribution:

  • number-of-cameras: Total number of cameras to generate
  • Generate cameras randomly
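
The uniform spherical placement described above can be pictured with a short Python sketch. This is only an illustration of the parameter semantics; the axis conventions and the jitter model are assumptions, not the plugin's actual code:

import math
import random

def spherical_camera_positions(center, distance,
                               y_min_angle, y_max_angle,
                               x_min_angle, x_max_angle,
                               y_steps, x_steps, added_noise=0.0):
    """Place cameras on a sphere patch around `center` (angles in degrees)."""
    cx, cy, cz = center
    positions = []
    for i in range(y_steps):
        for j in range(x_steps):
            # Interpolate each axis across its angular range, with optional jitter
            ty = (i + added_noise * random.uniform(-0.5, 0.5)) / max(y_steps - 1, 1)
            tx = (j + added_noise * random.uniform(-0.5, 0.5)) / max(x_steps - 1, 1)
            theta = math.radians(y_min_angle + ty * (y_max_angle - y_min_angle))
            phi = math.radians(x_min_angle + tx * (x_max_angle - x_min_angle))
            positions.append((cx + distance * math.sin(phi) * math.cos(theta),
                              cy + distance * math.sin(phi) * math.sin(theta),
                              cz + distance * math.cos(phi)))
    return positions

The random distribution corresponds to drawing theta and phi uniformly within the same angular ranges instead of stepping through them.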

Photogrammetry Generator Panel

Rendering Parameters:

  • lens-distortion: Lens distortion amount (-1 to 1)
  • chromatic-dispersion: Chromatic aberration amount (0-1)
  • render-dir: Output directory for rendered data
  • Render scene
  • Select working Bounding Box
  • Update working Bounding Box

Usage

  1. Set up your 3D scene in Blender
  2. Delete any existing cameras by pressing Delete all cameras
  3. Create a center point for camera placement
  4. Configure camera parameters
  5. Generate cameras using uniform or random distribution
  6. Preview camera views
  7. Define the working bounding box
  8. Set rendering parameters and output directory
  9. Render the scene

The repository includes some preconfigured scenes for testing.

Output

The plugin generates:

  • RGB images (PNG format)
  • Depth maps (OpenEXR format)
  • Normal maps (OpenEXR format)
  • Scene geometry (PLY format)
  • Camera parameters and metadata (JSON format)

Technical Notes

The bounding box defined when rendering is used to compute metrics and to delimit the region of interest for the error-colored mesh. It defaults to a unit cube at the center, so if the area of interest in the scene lies outside it, the metrics will not account for that area. It is therefore important to set the working bounding box so that it encloses the desired volume. The bounding box can be updated by pressing the corresponding UI element in the rendering parameters section.
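
Conceptually, restricting the metrics to the bounding box amounts to a point-in-box test of the following kind (a minimal sketch, not the project's actual code):

import numpy as np

def inside_bbox_mask(points, bbox_min, bbox_max):
    """Boolean mask selecting the (N, 3) points inside an axis-aligned box."""
    points = np.asarray(points)
    return np.all((points >= bbox_min) & (points <= bbox_max), axis=1)

Points for which this mask is False would not contribute to the computed metrics.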

3D Reconstruction and Evaluation Framework

This framework provides tools for executing different 3D reconstruction methods from images and evaluating their accuracy against ground truth. It supports multiple reconstruction algorithms and includes utilities for adding noise, computing metrics, and visualizing errors.

Overview

The framework consists of two main components:

  • exec_model.py: Main script for executing reconstruction methods and computing initial metrics
  • metrics_mesh.py: Additional script for computing precision/recall/fscore metrics and generating error-colored visualizations

Supported Methods

  • alice: AliceVision
  • colmap: COLMAP
  • nerfacto: SDFStudio with Nerfacto method
  • bakedsdf: SDFStudio with BakedSDF method
  • fds: Fast Dipole Sums

Usage

First, run the following commands from the parent folder of the project. The first sets up AliceVision; the second activates the Python environment:

source env_setup.sh
source env/bin/activate

Running a Reconstruction

python exec_model.py --method METHOD --input_dir INPUT_DIR --out_dir OUT_DIR \
                     --error_threshold ERROR_THRESHOLD --reconstruct RECONSTRUCT \
                     [optional parameters]

Mandatory Parameters

  • --method: Method to use for reconstruction (alice, colmap, fds, nerfacto, bakedsdf)
  • --input_dir: Directory containing input scene (where the "render" folder is located)
  • --out_dir: Path to output directory where results will be stored
  • --error_threshold: Error threshold for evaluation
    • Use percentage format (e.g., 0.05%) for relative error based on bounding box diagonal
    • Use decimal format (e.g., 0.01) for absolute error in world units
  • --reconstruct: Whether to run the reconstruction process
    • True: Run the full reconstruction pipeline
    • False: Only compute metrics on an existing reconstruction. This option is useful for testing ICP or regenerating the CSV files without executing the full pipeline

Optional Parameters

  • --mono_prior: Whether to use mono-priors (normal & depth) for neural methods (default: False)
  • --inside: Whether the reconstruction is inside a room (True) or a single object outside (False) (default: True). This affects only bakedsdf and fds.
  • --use_icp: Whether to use ICP for alignment (default: False)
  • --use_global_icp: Whether to use Go-ICP for initial alignment (default: False). Useful when the camera reconstruction fails.
  • --export_csv: Whether to export results as CSV files (default: True)
  • --image_noise: Image noise to apply to training images (default: none)
    • none: No noise
    • gaussian#mean#variance: Gaussian noise with specified mean and variance
    • salt_pepper#probability: Salt and pepper noise with specified probability
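
To make the --image_noise spec format concrete, here is a minimal Python sketch of how such a spec could be parsed and applied with NumPy. It is illustrative only, not the framework's actual implementation:

import numpy as np

def apply_image_noise(image, spec):
    """Apply a noise spec such as 'gaussian#0#0.01' or 'salt_pepper#0.05'
    to a float image in [0, 1]. Illustrative sketch."""
    if spec == "none":
        return image
    parts = spec.split("#")
    if parts[0] == "gaussian":
        mean, variance = float(parts[1]), float(parts[2])
        noisy = image + np.random.normal(mean, np.sqrt(variance), image.shape)
        return np.clip(noisy, 0.0, 1.0)
    if parts[0] == "salt_pepper":
        probability = float(parts[1])
        mask = np.random.random(image.shape[:2])
        noisy = image.copy()
        noisy[mask < probability / 2] = 0.0      # pepper
        noisy[mask > 1 - probability / 2] = 1.0  # salt
        return noisy
    raise ValueError(f"Unknown noise spec: {spec}")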

Computing Additional Metrics and Visualizations

After running a reconstruction, you can generate additional metrics and error-visualized meshes:

python metrics_mesh.py --method METHOD --input_dir INPUT_DIR --out_dir OUT_DIR \
                       --error_threshold ERROR_THRESHOLD [--colormap COLORMAP]

Parameters

  • --method: Same method used for reconstruction
  • --input_dir: Same input directory used for reconstruction
  • --out_dir: Same output directory used for reconstruction
  • --error_threshold: Error threshold for evaluation (same format as in exec_model.py)
  • --colormap: Color map to use for error visualization (default: jet)

Metrics

The framework computes multiple metrics to evaluate reconstruction quality. Depending on the --error_threshold format, the metrics are normalized or not: if the threshold is relative, the distance metrics are normalized by the diagonal of the working bounding box; if the threshold is absolute, the metrics are computed in world units:
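
For example, the two threshold formats can be interpreted along these lines (a sketch; parse_error_threshold is a hypothetical helper, not part of the framework's API):

def parse_error_threshold(spec, bbox_diagonal):
    """Interpret '0.05%' as relative to the bbox diagonal, '0.01' as world units."""
    if spec.endswith("%"):
        return float(spec[:-1]) / 100.0 * bbox_diagonal
    return float(spec)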

Camera and Structure Metrics

  • Camera position error: Mean Euclidean distance between ground truth and reconstructed camera positions after alignment
  • Camera direction error: Mean angular difference (in degrees) between ground truth and reconstructed camera orientations
  • SFM point cloud error: Mean distance between sparse Structure from Motion points and the ground truth mesh
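
For instance, the camera direction error above can be computed as the mean angle between paired unit view-direction vectors, roughly as follows (a sketch assuming (N, 3) arrays of directions, not the framework's exact code):

import numpy as np

def mean_direction_error_deg(gt_dirs, rec_dirs):
    """Mean angular difference in degrees between paired direction vectors."""
    gt = np.asarray(gt_dirs, dtype=float)
    rec = np.asarray(rec_dirs, dtype=float)
    gt /= np.linalg.norm(gt, axis=1, keepdims=True)
    rec /= np.linalg.norm(rec, axis=1, keepdims=True)
    cos = np.clip(np.sum(gt * rec, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())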

Geometry Metrics

  • Final point cloud error: Mean distance between the dense reconstructed point cloud and the ground truth mesh
  • Final mesh error: Mean distance between points sampled from the reconstructed mesh and their closest points on the ground truth mesh
  • Squared distance error: Mean squared Euclidean distance between reconstructed mesh points and ground truth mesh
  • Hausdorff distance: Maximum distance between any point in the reconstructed mesh and its closest point on the ground truth mesh (worst-case error)
  • Normal direction error: Mean angular difference (in degrees) between surface normals at corresponding points on the reconstructed and ground truth meshes

Depth Metrics

  • Final point cloud depth error: Mean error between reconstructed point clouds and the ground truth depth maps, averaging errors across all cameras that observe each point.
  • Final point cloud depth min error: Mean of the minimum errors for each reconstructed point. For each point, we find the minimum error across all cameras that observe it, then average these minimum values.
  • Depth error: Mean error between point clouds generated from the reconstructed depth maps and the ground truth mesh

Quality Assessment Metrics

  • Precision: Percentage of points in the reconstructed mesh that are within the error threshold distance of the ground truth mesh
  • Recall: Percentage of points in the ground truth mesh that are within the error threshold distance of the reconstructed mesh
  • F-score: Harmonic mean of precision and recall (2 × precision × recall)/(precision + recall)
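
To make these three definitions concrete, here is a minimal sketch using Open3D nearest-neighbor distances over points sampled from both meshes (illustrative only, not the framework's exact implementation):

import numpy as np
import open3d as o3d

def precision_recall_fscore(rec_points, gt_points, threshold):
    """Precision/recall/F-score between two (N, 3) sampled point sets."""
    rec = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(rec_points))
    gt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(gt_points))
    # Nearest-neighbor distance from each point to the other set
    d_rec_to_gt = np.asarray(rec.compute_point_cloud_distance(gt))
    d_gt_to_rec = np.asarray(gt.compute_point_cloud_distance(rec))
    precision = float(np.mean(d_rec_to_gt < threshold))
    recall = float(np.mean(d_gt_to_rec < threshold))
    denom = precision + recall
    fscore = 2 * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, fscore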

Axis-Specific Metrics

  • Points axis errors: Mean error along each camera-relative axis (x, y, z), useful for analyzing directional biases in reconstruction

Output

The framework generates the following outputs:

  1. Reconstructed 3D model in the specified output directory
  2. JSON file with summary statistics (stats.json)
  3. CSV files with detailed point-wise error metrics in the metrics folder:
    • inside_bbox_metrics.csv: Metrics for points inside the bounding box
    • inv_inside_bbox_metrics.csv: Reverse metrics (from ground truth to reconstruction)
    • final_point_cloud_metrics.csv: Metrics for the entire point cloud
  4. Reconstructed 3D model centered with respect to the ground truth mesh in the metrics folder
  5. Error-colored mesh visualization showing reconstruction quality with color-coding in the metrics folder
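
As a quick way to inspect these outputs (the exact CSV columns depend on the run, and the paths below assume --out_dir ./results with the metrics folder inside it):

import json
import pandas as pd

with open("./results/stats.json") as f:
    stats = json.load(f)    # summary statistics for the run
print(stats)

df = pd.read_csv("./results/metrics/inside_bbox_metrics.csv")
print(df.describe())        # distribution of point-wise errors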

Examples

Basic Reconstruction with COLMAP

python exec_model.py --method colmap --input_dir ./scenes/room1 --out_dir ./results \
                     --error_threshold 0.05% --reconstruct True

Neural Reconstruction with Noise

python exec_model.py --method nerfacto --input_dir ./scenes/statue --out_dir ./results \
                     --error_threshold 0.01 --reconstruct True --inside False \
                     --image_noise gaussian#0#0.01

Visualizing Error Distribution

After running a reconstruction, you can visualize the error distribution on the reconstructed mesh:

python metrics_mesh.py --method colmap --input_dir ./scenes/room1 --out_dir ./results \
                       --error_threshold 0.05% --colormap viridis

This will generate a colored mesh where each point is colored according to its distance from the ground truth, providing an intuitive visualization of where reconstruction errors occur.
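
Conceptually, the coloring step maps each per-vertex distance through the chosen colormap, along these lines (a sketch using Matplotlib and Open3D, not the script's exact code):

import numpy as np
import open3d as o3d
import matplotlib.pyplot as plt

def color_mesh_by_error(mesh, errors, max_error, colormap="viridis"):
    """Color mesh vertices by their normalized per-vertex error distances."""
    normalized = np.clip(np.asarray(errors) / max_error, 0.0, 1.0)
    colors = plt.get_cmap(colormap)(normalized)[:, :3]  # RGBA -> RGB
    mesh.vertex_colors = o3d.utility.Vector3dVector(colors)
    return mesh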

License

The project is licensed under the MIT License. Please see the license file for further details.
