Hello,
it seems like there is either a methodological issue in how the IMU initialization operates, or I'm not understanding things properly:
In all of the Optimizer::InertialOptimization() overloads, a Map of KeyFrames is used as input.
I understand the coordinate frame of those keyframes to be the camera frame, while the inertial optimization works in the IMU (body) frame.
To translate between the two frames, an ImuCamPose adapter is used. This adapter applies the constant, metric offset between camera and IMU, as read from calibration.
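For reference, here is a minimal sketch of how I understand that conversion (illustrative only, not the actual ImuCamPose code; the names `Pose`, `Twc`, `Tcb`, and `CameraToImuPose` are my own):

```cpp
// Minimal sketch (illustrative, not the actual ImuCamPose implementation):
// compose a keyframe's world-from-camera pose with the calibrated
// camera-to-IMU extrinsic to obtain the world-from-body (IMU) pose.
#include <Eigen/Dense>

struct Pose {
    Eigen::Matrix3d R;  // rotation
    Eigen::Vector3d t;  // translation
};

// T_wb = T_wc * T_cb, with T_cb the metric camera-IMU extrinsic from calibration.
Pose CameraToImuPose(const Pose& Twc, const Pose& Tcb)
{
    Pose Twb;
    Twb.R = Twc.R * Tcb.R;
    Twb.t = Twc.R * Tcb.t + Twc.t;  // metric t_cb composed with map-scaled t_wc
    return Twb;
}
```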
However, at least in the monocular case, the scale of the camera trajectory is unknown at the time of this optimization -- obviously, since scale is part of what is being estimated. But the calibrated camera-IMU translation is metric, so it is effectively added in the wrong scale relative to the map.
In other words, the optimization seems to receive inconsistent IMU poses as input, because those poses were derived by composing camera poses of unknown scale with a metric camera-IMU offset.
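To make that concrete, a small worked derivation in my own notation, where $s$ is the unknown factor relating the map-frame camera translation to its metric value, $t_{wc}^{map} = s\, t_{wc}^{metric}$. The body translation produced by the composition above is

$$
t_{wb}^{map} = R_{wc}\, t_{cb}^{metric} + t_{wc}^{map} = R_{wc}\, t_{cb}^{metric} + s\, t_{wc}^{metric},
$$

whereas a body translation consistently expressed at map scale would be

$$
s \left( R_{wc}\, t_{cb}^{metric} + t_{wc}^{metric} \right).
$$

The discrepancy is $(1 - s)\, R_{wc}\, t_{cb}^{metric}$, i.e. it grows with both the physical camera-IMU offset and the error in the initial scale.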
This is probably not a huge issue if the camera-IMU offset is physically small and the initial camera scale estimate is already accurate, but it may lead to poor inertial-only estimates the further those assumptions are from holding.