It should work if you implement these changes. You will have to download this repository, apply the changes, and rebuild the SDK in Xcode. You can then use the output files in Unity.

I have applied the changes and built the SDK. I can see there are a lot of files — which files should I move to Unity?
Every time you change the code of the SDK you need to update the cardboard-xr-plugin too. Please take a look at the following links that explain how to do so: |
|
@maartenslaa thanks a lot for this contribution. I took a look at the code and it looks good; however, I suspect this patch might be covering up another problem. Before the clock patch, the head tracker was not shaky, and it might be because it was using another clock and the EKF got samples with wrong timing, which definitely would affect the pose estimation. Now that we are getting the time right, the pose estimation is not as stable as before. It's not evident to me that just adding a filter with a 6 Hz cut-off frequency will solve the problem — both because of the frequency itself (note that other inputs have smaller cut-off frequencies) and because it puts a filter on an input that was not filtered previously. I'd like to run a few experiments before making the current architecture more complex than it is. As stated in #209, there might be an issue for specific iOS devices, and we are using the same pose estimation for both architectures. I will get back to you with further comments and a proposal of next steps to move forward with this.
|
No problem! Doing a prediction on data introduces an error, especially when the prediction is done from a single sample in a linear way, as it is now. Before the clock patch the prediction didn't occur, so the image was a lot smoother. I can imagine the prediction would be better if it were part of the Kalman filter itself, since the filter is able to set proper bounds on what the predicted value can be and chooses the one with the highest likelihood of being correct. I do have to say my knowledge of the workings of a Kalman filter is very limited. About the 6 Hz: I just experimented with different cutoff frequencies, and 6 Hz seemed to be one that didn't introduce lag while still having a positive impact on stability.
|
Looking again at my implementation it is a little strange to filter the current state in ProcessGyroscopeSample() instead of the predicted state from PredictPose(). Maybe changing some of the constants of the EKF is the way to go. |
|
I did a test where I added acceleration to the prediction step. It was like this: `dx = v*dt`; now it is this: `dx = v*dt + (a*dt^2)/2`. It should result in a slightly more accurate prediction. The difference is small since `dt` is small, but I do think it helps a bit. I tested without the velocity filter, and the image seems a little bit more real. For this test I save the previous state in the head tracker and use it to predict:

```cpp
Rotation PredictPose(int64_t requested_pose_timestamp) {
  const Vector3 a = (current_state.sensor_from_start_rotation_velocity -
                     previous_state.sensor_from_start_rotation_velocity) /
                    (current_state.timestamp - previous_state.timestamp);
  const Rotation update = GetRotationFromGyroscope(/* arguments elided */);
  return update * current_state.sensor_from_start_rotation;
}
```
|
@maartenslaa I checked this patch in Android and the stability improvement can be easily detected, it is good. I'd like to better define the hyper-parameters of the implementation before proceeding (cut-off frequency for example). I'll get back to you soon with a proposal. |
|
BTW, I tested this with the HelloCardboard Android application on a Pixel 4XL with Android 10. |
|
This patch makes the head tracking better. But it still feels like the head is held back by some weird damping force while it's moved. Then, when you stop moving the head, the tracking catches up, like backlash. Is the output from the gyro really so noisy or unreliable that it has to be filtered or predicted?
|
Hello. When looking into the Kalman code of this SDK, it is as good as expected. But regarding this issue of minute screen blurring, I agree that it is related to the unstable rotation matrix that eventually produces the display through the projection matrix. Instead of applying an external filter (with a low cutoff frequency), I would like to utilize the process of Kalman filtering with quaternions. As you can see, in version 1.5.0, PredictionRotation() has `Rotation update = GetRotationFromGyroscope()`. Looking inside GetRotationFromGyroscope(), we can see that it builds the quaternion from the gyro value alone (unit: rps or dps) and returns the updated quaternion. If you refer to Kalman filter articles that use quaternions, you can see the update written as `q(n+1) = q(n)*dq(n)` or `q(n+1) = dq(n)*q(n)`, with respect to the world-to-body or body-to-world coordinates. We might utilize this rotation update process like this:

```cpp
double alpha = 0.4;  // determined by trial & error
```

I wish this can help you. Thanks.
|
Hi, |
|
Your proposed changes have already been merged internally and will be included in the next release. Thanks for contributing to Cardboard!
|
This was merged in v1.26.0. |
When prediction is implemented in v1.5.0 as in this patch, the view can appear a bit unstable. I think this is because the velocity used to predict movement is obtained by extrapolating from a single sample point. This lets frequencies higher than human neck movement can produce pass through into the predicted state. My solution is to push the velocity vector through a low-pass filter. I tried a couple of cutoff frequencies, and 6 Hz works well for me.
My knowledge of a Kalman filter is limited; however, I can imagine a better result if a predicted value could be taken from the filter itself instead of extrapolating from the last point.