I have a question regarding the use of the onboard position estimator/controller and the observer parameters. We are trying to control the Asctec Pelican using the onboard position controller with a ground camera, which provides the pose of the quadrotor in the world frame. This pose is quite stable: when the drone is hanging still in the air, the deviation of the position is about ±1 cm. This pose is fed into the HL pose estimator, and the filtered position output is also very stable and accurate. However, the acceleration bias and the filtered velocities are out of bounds: the acceleration biases are always in the range of -0.3 to -0.5, and the filtered velocities are in the range of ±0.25. When trying to hover with this setup, the drone starts to drift heavily in one direction, probably to counter the velocity offset. This is not surprising; however, we are unable to bring the acceleration bias into an acceptable range. When we decrease the pole of the bias filter, the offset of the filtered velocities decreases to a more acceptable range (±0.1 to 0.15), but the drone then very quickly starts oscillating heavily when trying to hover.
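I don't know the internals of the HL estimator, but the coupling you describe (lowering the bias pole reduces the velocity offset but destabilizes the loop) can be reproduced with a generic position/velocity/accelerometer-bias observer. Below is a minimal 1-D sketch, assuming a simple Luenberger-style observer where the three gains come from placing all observer poles at -2 rad/s; the function names and gain values are my own, not the Asctec firmware's. With a stationary vehicle and a constant accelerometer bias, the bias state should absorb the offset and the velocity estimate should settle near zero:

```python
def observer_step(state, a_meas, z, dt, L=(6.0, 12.0, 8.0)):
    """One step of a position/velocity/accel-bias observer (one axis).

    state  : (p, v, b) current estimates
    a_meas : accelerometer reading (contains the unknown bias b)
    z      : external position measurement (camera)
    L      : gains from placing all observer poles at -2 rad/s,
             i.e. (s + 2)^3 = s^3 + 6 s^2 + 12 s + 8  (an assumption,
             not the Asctec tuning)
    """
    p, v, b = state
    # Predict: integrate the bias-corrected accelerometer
    p += dt * v
    v += dt * (a_meas - b)
    # Correct: feed the position innovation back into all three states
    e = z - p
    p += dt * L[0] * e
    v += dt * L[1] * e
    b -= dt * L[2] * e   # bias learns from the residual position drift
    return (p, v, b)


# Stationary drone, accelerometer stuck at +0.4 (pure bias), camera says 0.
state = (0.0, 0.0, 0.0)
for _ in range(3000):                      # 30 s at dt = 0.01
    state = observer_step(state, a_meas=0.4, z=0.0, dt=0.01)
p_hat, v_hat, b_hat = state                # b_hat -> 0.4, v_hat -> 0
```

Moving the poles closer to the origin slows the bias state (less velocity offset while moving, as you observed) but also slows the rejection of position errors, which lowers the phase margin of the outer control loop and can explain the oscillations.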
Do you have any suggestions for how we could achieve stable hovering using the external camera and IMU? We are not expecting the quadrotor to hold its position with 1 cm accuracy; a 20-30 cm radius would be acceptable, as long as there are no oscillations.
I don't know exactly what happened in your case. If your position estimate is accurate, you should get a good velocity estimate. Are you sure your implementation is correct (signs, axes)? Perhaps you can try writing your own sensor fusion framework, fusing the position and IMU data in a Kalman filter.
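As a starting point for such a fusion, here is a minimal sketch of a linear Kalman filter for one axis, predicting with the accelerometer and correcting with the camera position fix. The noise parameters `q` and `r` are placeholder values (r matches the ~1 cm camera deviation mentioned above) and would need tuning on the real system:

```python
import numpy as np

def kf_step(x, P, a_meas, z, dt, q=0.5, r=0.01**2):
    """One Kalman filter step for state x = [position, velocity] (one axis).

    a_meas : accelerometer reading used as control input
    z      : camera position measurement
    q      : accel process-noise density (assumed value, tune on hardware)
    r      : camera position variance, here (1 cm)^2
    """
    # Predict: constant-acceleration kinematics driven by the IMU
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # Correct: scalar position measurement, H = [1, 0]
    y = z - x[0]                 # innovation: camera minus prediction
    S = P[0, 0] + r
    K = P[:, 0] / S              # Kalman gain (2-vector)
    x = x + K * y
    P = P - np.outer(K, P[0, :])
    return x, P


# Stationary drone: start with a 10 cm position error, feed z = 0, a = 0.
x, P = np.array([0.1, 0.0]), np.eye(2)
for _ in range(500):             # 5 s at dt = 0.01
    x, P = kf_step(x, P, a_meas=0.0, z=0.0, dt=0.01)
# x converges to [0, 0]: a clean position fix yields a clean velocity.
```

In practice you would add an accelerometer-bias state (as the onboard observer does) and run one such filter per axis, or a single 3-D filter.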
Thanks for your response. I will check everything; however, as far as I know, the problem of the huge acceleration bias remains regardless of the implementation. If I keep the quadrotor perfectly steady, the acceleration bias and the velocities still have a large offset. Shouldn't errors in axes/signs only play a role when the quadrotor is moving?
In fact, the signs matter in all cases. It is strange that the observer gives a biased velocity estimate. I don't know much about that observer, so I can't tell the reason for this.
We have a motion tracking system (VICON) in our lab. What I did was process the position information from VICON with an EKF to get velocity. The drone can then be controlled with a PD controller. My point is: since you already have a position estimate, you could implement your own sensor fusion algorithm and design a simple PID controller.
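The setup described above can be sketched in a few lines, assuming a constant-velocity filter on the position fixes (for this linear model a plain KF suffices in place of the EKF) feeding a PD law, with the vehicle approximated as a double integrator per axis. All gains and noise values here are illustrative, not the ones used in our lab:

```python
import numpy as np

def kf_vel(x, P, z, dt, q=2.0, r=1e-4):
    """Constant-velocity KF: estimate x = [p, v] from position fixes alone.
    q and r are assumed tuning values."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - x[0]
    S = P[0, 0] + r
    K = P[:, 0] / S
    x = x + K * y
    P = P - np.outer(K, P[0, :])
    return x, P

def pd_control(p_ref, p_est, v_est, kp=4.0, kd=3.0):
    """PD law on estimated position and velocity (gains are illustrative)."""
    return kp * (p_ref - p_est) - kd * v_est


# Closed-loop check on one axis: double-integrator plant, 30 cm initial error.
dt = 0.02
p, v = 0.3, 0.0                          # true state
x, P = np.array([p, 0.0]), np.eye(2) * 0.1
for _ in range(500):                     # 10 s
    u = pd_control(0.0, x[0], x[1])      # accel command from the estimates
    v += dt * u                          # plant: dv = u dt, dp = v dt
    p += dt * v
    x, P = kf_vel(x, P, p, dt)           # position fix from the tracker
# p settles near the reference without a velocity sensor.
```

The damping term on the estimated velocity is what keeps the hover from oscillating, so the quality of the velocity estimate directly limits how much derivative gain you can use.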
You can email me if you want to talk in detail: firstname.lastname@example.org