
gfxdisp/CameraVDP


[SIGGRAPH Asia 2025] CameraVDP: Perceptual Display Assessment with Uncertainty Estimation via Camera and Visual Difference Prediction

Yancheng Cai, Robert Wanat, Rafał K. Mantiuk

Project Page | arXiv

Accurate measurement of images produced by electronic displays is critical for the evaluation of both traditional and computational displays. Traditional display measurement methods, based on sparse radiometric sampling and model fitting, are inadequate for capturing spatially varying display artifacts, as they fail to capture high-frequency and pixel-level distortions. While cameras offer sufficient spatial resolution, they introduce optical, sampling, and photometric distortions. Furthermore, the physical measurement must be combined with a model of the visual system to assess whether the distortions will be visible. To enable perceptual assessment of displays, we propose combining a camera-based reconstruction pipeline with a visual difference predictor, which accounts for both the inaccuracy of camera measurements and of visual difference prediction. The reconstruction pipeline combines HDR image stacking, MTF inversion, vignetting correction, geometric undistortion, homography transformation, and color correction, enabling cameras to function as precise display measurement instruments. By incorporating a Visual Difference Predictor (VDP), our system models the visibility of various stimuli under different viewing conditions for the human visual system. We validate the proposed CameraVDP framework through three applications: defective pixel detection, color fringing awareness, and display non-uniformity evaluation. Our uncertainty analysis framework enables estimation of the theoretical upper bound on defective pixel detection performance and provides confidence intervals for VDP quality scores.

How to Use CameraVDP from Scratch?

⚠️ Warning: You may need a large amount of storage space (in our experiments, we used more than 100GB for storing RAW images).


I. Camera Calibration Stage

1. Calibrate Camera Intrinsics and Distortion Parameters

  • Calibration is specific to each camera-lens-focal-length combination; different combinations require separate calibration.
  • For different lenses, adjust parameters carefully so that the checkerboard occupies roughly 70% of the frame at a fixed distance.
python Camera_calibration/plot_checker_board.py --square_size 64 --rows 32 --cols 58
  • Capture ARW-format images of the checkerboard from fixed distances and different angles (at least 9 images).
  • Ensure the checkerboard is always fully visible and occupies ≤90% of the image for accurate calibration.

Estimate intrinsics and distortion (the square size must match the printed pattern; rows/cols are the inner-corner counts, i.e., one less than the number of squares per side):

python calibrate_checker_board.py   # Set parameters, image paths, and output path first

If using a macro camera (subpixels of the display visible), use:

python calibrate_checker_board_refine.py   # Set parameters, image paths, and output path first

👉 Output: calibration.json or calibration_refine.json containing keys:

  • "ret"
  • "mtx" (camera intrinsics)
  • "dist" (distortion parameters)
  • "re_projection_error"

2. Calibrate Camera MTF (SFR) Curve

  • The MTF is also specific to each camera-lens-focal-length combination.
  • Prepare a high-contrast black/white slanted edge (e.g., Siemens star center, must be optically precise).
  1. Capture ARQ images (Pixel Shift Multi Shooting fused by Sony official app).
  2. Convert to .exr:
python Gather_ARQ_to_Exr_Uncertainty/Uncertainty_gather_raw_arq_to_exr_whole_process_all_HDR.py
  3. Use the generated _mean.exr to compute MTF:
python MTF/compute_MTF.py your_mean.exr   # Adjust exposure parameters to avoid over/under exposure

👉 Output: mtf.json with keys: "R", "G", "B", "Y" (MTF parameters for each channel).
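The slanted-edge principle behind compute_MTF.py can be sketched in a few lines of numpy (a simplified sketch on a synthetic 1-D edge; the real script additionally handles edge angle estimation, oversampling, and per-channel fitting):

```python
import numpy as np

def mtf_from_esf(esf):
    """Estimate the MTF from a 1-D edge spread function:
    differentiate to get the line spread function, then take the
    normalized FFT magnitude (the slanted-edge / SFR principle)."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]          # normalize so MTF(0) = 1

# Synthetic blurred edge: a soft step standing in for a captured edge.
x = np.linspace(-1, 1, 256)
edge = np.clip(0.5 + x * 4, 0, 1)
mtf = mtf_from_esf(edge)
```

The resulting curve starts at 1 at DC and falls off with frequency; the blurrier the captured edge, the faster the falloff.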


3. Calibrate Camera Vignetting Map

  • Requires a uniform light source (ideally integrating sphere, alternatively a flat-field calibrated display).
python Vignetting/plot_full_screen.py
  • Capture ARQ images at multiple exposures, merge into .exr:
python Gather_ARQ_to_Exr_Uncertainty/Uncertainty_gather_raw_arq_to_exr_whole_process_all_HDR_MTF_wiener.py
  • Recommended: capture 4 positions × 3 exposures = 12 ARQ files, then average.
python Vignetting/generate_Vignetting_map.py

👉 Output: vignetting_map.npz
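Conceptually, the vignetting map is the averaged flat-field capture normalized to its center, and correction is a per-pixel division. A minimal sketch on synthetic data (the falloff model and noise level are illustrative):

```python
import numpy as np

def vignetting_map(flat_fields):
    """Average repeated flat-field captures and normalize by the
    center value; dividing a capture by this map removes the
    radial lens falloff."""
    mean_flat = np.mean(flat_fields, axis=0)
    h, w = mean_flat.shape
    return mean_flat / mean_flat[h // 2, w // 2]

# Synthetic flat fields: cos^4-style falloff plus 1% noise.
rng = np.random.default_rng(0)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / (w / 2)
falloff = np.cos(np.clip(r, 0.0, 1.0) * 0.8) ** 4
captures = np.stack([falloff * (1 + 0.01 * rng.standard_normal((h, w)))
                     for _ in range(4)])
vmap = vignetting_map(captures)
corrected = captures[0] / vmap        # approximately uniform
```

Averaging multiple positions and exposures (the recommended 4 × 3 = 12 files) suppresses both camera noise and residual non-uniformity of the light source.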


4. Calibrate Camera Color Matrix

  • Run color checker stimulus:
python Color_correction/plot_color_checker_30colors_each.py --rect_width 2000 --rect_height 2000
  • Measure each patch using both camera and photometer (e.g., specbos):
python Color_correction/specbos_measure.py --output measure_specbos.json
  • Capture ARQ images, merge into .exr:
python Gather_ARQ_to_Exr_Uncertainty/Uncertainty_gather_raw_arq_to_exr_whole_process_all_HDR_MTF_wiener_Vignetting_Undistortion.py
  • Compute mean RGB values:
python Color_correction/compute_mean_RGB_value_from_exr_center_crop.py
  • Find RGB→XYZ matrix:
python Color_correction/find_RGB_to_XYZ_matrix.py

👉 Output: Camera_Colorchecker_RGB2XYZ.json
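Finding the RGB→XYZ matrix is a least-squares fit over the paired camera/photometer patch measurements. A self-contained sketch with a synthetic ground-truth matrix (the matrix values are illustrative):

```python
import numpy as np

def fit_rgb_to_xyz(rgb, xyz):
    """Least-squares fit of a 3x3 matrix M such that xyz ≈ rgb @ M.T,
    from paired camera RGB and photometer XYZ patch measurements."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

# Synthetic check: recover a known matrix from noiseless patches.
rng = np.random.default_rng(0)
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0, 1, size=(30, 3))   # 30 color patches, as in the stimulus
xyz = rgb @ true_M.T
M = fit_rgb_to_xyz(rgb, xyz)
```

With real measurements the fit is over-determined and noisy, which is why a 30-patch stimulus rather than a minimal 3-patch one is used.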


5. Estimate Camera Noise Parameters

  1. Bright noise model (RAW variance vs. mean → quantum efficiency):
python Camera_calibration/compute_noise_model_bright.py
  2. Dark noise model (lens covered, variance vs. gain → read & ADC noise):
python Camera_calibration/compute_noise_model_dark.py
  3. Fit parameters:
python Camera_calibration/Fit_Noise_Model_together.py

👉 Outputs:

  • Noise_model_bright.json, Noise_model_dark.json
  • Fit_parameters_bright.json, Fit_parameters_dark.json (keys: k, sigma_read, sigma_adc for RGB channels)

⚠️ Add these parameters into HDRutils/merge_uncertainty.py (imread_merge_arq_no_demosaic_uncertainty, lines 173–175).
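The bright/dark fits follow the standard photon-transfer model: pixel variance is affine in the mean signal, with the slope giving the shot-noise gain k and the intercept the signal-independent read + ADC noise. A minimal sketch on synthetic data (the parameter values are illustrative):

```python
import numpy as np

def fit_noise_model(means, variances):
    """Fit the affine photon-transfer model  var = k * mean + sigma2:
    the slope k reflects photon (shot) noise and quantum efficiency,
    the intercept the signal-independent read + ADC noise."""
    k, sigma2 = np.polyfit(means, variances, deg=1)
    return k, sigma2

# Synthetic photon-transfer data with known parameters.
rng = np.random.default_rng(1)
true_k, true_sigma2 = 0.02, 4.0
means = np.linspace(10, 1000, 50)
variances = true_k * means + true_sigma2 + rng.normal(0, 0.05, 50)
k, sigma2 = fit_noise_model(means, variances)
```

Bright captures constrain the slope, dark captures the intercept, which is why both measurements are needed before fitting them together.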


II. Display Capture Stage

1. Estimate Camera Extrinsics (Homography)

  • Generate OpenCV ArUco patterns:
python Homographic_Transform/plot_opencv_aruco.py
  • Capture ARQ images (fixed camera position), merge into .exr:
python Gather_ARQ_to_Exr_Uncertainty/Uncertainty_gather_raw_arq_to_exr_whole_process_all_HDR_MTF_wiener_Vignetting_Undistortion.py

👉 Output: homography_aruco_result_exr.json
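The extrinsics step estimates a 3×3 homography that maps detected ArUco corner positions in the camera image onto display pixel coordinates. A numpy-only sketch of the underlying direct linear transform (the repo uses OpenCV's detector and solver; the corner coordinates below are made up):

```python
import numpy as np

def find_homography(src, dst):
    """Direct linear transform: solve for the 3x3 homography H with
    dst ~ H @ src (in homogeneous coordinates), from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)       # null vector = homography entries
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Hypothetical marker corners (camera plane) -> display pixel corners.
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = np.array([[100.0, 50.0], [400.0, 60.0], [390.0, 350.0], [110.0, 340.0]])
H = find_homography(src, dst)
```

With many ArUco corners the same least-squares formulation averages out detection noise, which is why the pattern contains multiple markers rather than a single quad.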


2. End-to-End Capture (RGB mean & variance)

python Gather_ARQ_to_Exr_Uncertainty/Uncertainty_gather_raw_arq_to_exr_whole_process_all_HDR_MTF_wiener_Vignetting_Undistortion_Homo.py

3. Color Conversion (RGB→XYZ, with uncertainty)

python Uncertainty_Color_Correction_after_HT.py

👉 Output: _mean.exr and _cov.npz (XYZ mean and covariance).
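Because the RGB→XYZ conversion is linear, the per-pixel uncertainty propagates exactly: the mean maps through the matrix and the covariance as M Σ Mᵀ. A minimal sketch for a single pixel (the matrix and variances are illustrative):

```python
import numpy as np

def propagate_color_covariance(M, rgb_mean, rgb_cov):
    """Push an RGB mean and its 3x3 covariance through the linear
    color transform M (RGB -> XYZ): the mean maps as M @ mu and the
    covariance as M @ Sigma @ M.T (exact for a linear map)."""
    xyz_mean = M @ rgb_mean
    xyz_cov = M @ rgb_cov @ M.T
    return xyz_mean, xyz_cov

M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
mu = np.array([0.5, 0.4, 0.3])
Sigma = np.diag([1e-4, 2e-4, 1e-4])   # per-channel camera-noise variances
xyz_mu, xyz_Sigma = propagate_color_covariance(M, mu, Sigma)
```

Note that even a diagonal RGB covariance becomes a full XYZ covariance, since the matrix mixes channels; this is why the pipeline stores a covariance (_cov.npz) rather than per-channel variances.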


III. Display VDP Perceptual Evaluation Stage

1. Perceptual Score with Uncertainty

python run_ColorVideoVDP/run_ColorVideoVDP_uncertainty.py
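One way to obtain a confidence interval for a quality score from the reconstructed mean and variance is Monte Carlo sampling: draw images consistent with the measurement uncertainty, score each, and take percentiles. A generic sketch with a toy stand-in metric (the actual script runs ColorVideoVDP; the metric, image sizes, and sample count below are placeholders):

```python
import numpy as np

def score_confidence_interval(metric, mean_img, var_img,
                              n_samples=200, alpha=0.05):
    """Monte Carlo confidence interval for a quality score: draw
    images consistent with the per-pixel mean and variance, score
    each sample, and report percentiles of the score distribution."""
    rng = np.random.default_rng(0)
    scores = []
    for _ in range(n_samples):
        sample = mean_img + rng.normal(0.0, np.sqrt(var_img))
        scores.append(metric(sample))
    lo, hi = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.mean(scores)), (float(lo), float(hi))

# Toy stand-in metric: negative RMSE against a reference image.
ref = np.zeros((16, 16))
mean_img = np.full((16, 16), 0.1)
var_img = np.full((16, 16), 1e-4)
metric = lambda img: -np.sqrt(np.mean((img - ref) ** 2))
score, (lo, hi) = score_confidence_interval(metric, mean_img, var_img)
```

The width of (lo, hi) is what the pipeline reports as the confidence interval on the VDP quality score.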

2. Difference Visibility Heatmap

python run_ColorVideoVDP/run_ColorVideoVDP_heatmap.py

✅ You now have the full CameraVDP pipeline with uncertainty-aware perceptual evaluation.
