Add support for Lidar sensor #1424
Add Lidar sensor in AirLib object model as well as Unreal
implementation using ray-casting. Add python client example.
Future work:
- Enable configuration of lidar parameters
- Enable visualization of point-cloud on client side
Resolves: microsoft#621
sytelus left a comment
Looks really good! Most of the comments are just minor nitpicks. I think we are very close to merging these changes.
Unreal/Plugins/AirSim/Source/UnrealSensors/UnrealLidarSensor.cpp (two resolved comments; outdated)
// calculate end position (params.range is expected to be in meters)
constexpr float TO_CENTIMETERS = 1e+2f;
FVector end = (params.range * TO_CENTIMETERS) * UKismetMathLibrary::GetForwardVector(resultRot) + start;
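The endpoint computation in this diff can be sketched with plain types. `Vec3`, `forwardFromYaw`, and `rayEndpoint` below are hypothetical stand-ins for Unreal's `FVector` and `UKismetMathLibrary::GetForwardVector`; a yaw-only rotation is assumed for brevity, whereas the real code handles full rotators:

```cpp
#include <cassert>
#include <cmath>

// Plain-C++ stand-in for Unreal's FVector, for illustration only.
struct Vec3 { float x, y, z; };

// Hypothetical forward vector for a yaw-only rotation (radians);
// Unreal's UKismetMathLibrary::GetForwardVector handles full rotators.
Vec3 forwardFromYaw(float yaw_rad) {
    return { std::cos(yaw_rad), std::sin(yaw_rad), 0.0f };
}

// end = start + (range converted to cm) * forward.
// Unreal works in centimeters while params.range is in meters,
// hence the TO_CENTIMETERS conversion in the diff above.
Vec3 rayEndpoint(const Vec3& start, float range_m, float yaw_rad) {
    constexpr float TO_CENTIMETERS = 1e+2f;
    Vec3 fwd = forwardFromYaw(yaw_rad);
    float r = range_m * TO_CENTIMETERS;
    return { start.x + r * fwd.x, start.y + r * fwd.y, start.z + r * fwd.z };
}
```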
I'm wondering if this computation can be cleaned up a bit. We have three coordinate frames: the world frame, the vehicle frame, and the lidar frame. The usual way to do this computation is to start with the innermost frame and work outward until you have the vector in the world frame. It is also often useful to do all computation in just one system (for example, either NED or UE). Usually we keep everything in the NED system and convert to the UE system only when a UE API call is needed. This keeps the code portable to other engines (we have a Unity port in the works).
Below is just pseudocode. The suffixes _w, _b and _l denote the world, body, and lidar frames.
We have:
ray_h_l, ray_v_l = horizontal and vertical angles for ray in lidar frame
lidar_pose_b = lidar pose relative to vehicle body
vehicle_pose_w = vehicle pose relative to world frame
//get ray quaternion in lidar frame (angles must be in radians)
ray_q_l = VectorMath::toQuaternion(ray_v_l, 0, ray_h_l)
//get ray quaternion in body frame
ray_q_b = VectorMath::rotateQuaternion(ray_q_l, lidar_pose_b.orientation)
//get ray quaternion in world frame
ray_q_w = VectorMath::rotateQuaternion(ray_q_b, vehicle_pose_w.orientation)
//get ray vector
ray_w = VectorMath::rotateVector(VectorMath::front() * max_distance, ray_q_w)
+ lidar_pose_b.position + vehicle_pose_w.position
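The rotation chain in the pseudocode above can be sketched in plain C++ with a minimal quaternion type standing in for AirLib's VectorMath. This is only an illustration under stated assumptions (Hamilton-convention quaternions, ZYX yaw-pitch-roll Euler angles, identity translations); the real AirLib signatures and conventions may differ:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// Hamilton product: applying b first, then a.
Quat quatMul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Stand-in for VectorMath::toQuaternion(pitch, roll, yaw), angles in radians.
Quat quatFromEuler(float pitch, float roll, float yaw) {
    float cy = std::cos(yaw * 0.5f),   sy = std::sin(yaw * 0.5f);
    float cp = std::cos(pitch * 0.5f), sp = std::sin(pitch * 0.5f);
    float cr = std::cos(roll * 0.5f),  sr = std::sin(roll * 0.5f);
    return { cr*cp*cy + sr*sp*sy,
             sr*cp*cy - cr*sp*sy,
             cr*sp*cy + sr*cp*sy,
             cr*cp*sy - sr*sp*cy };
}

// Rotate v by unit quaternion q: v' = q * (0, v) * conj(q).
Vec3 rotateVec(const Vec3& v, const Quat& q) {
    Quat p{0.0f, v.x, v.y, v.z};
    Quat conj{q.w, -q.x, -q.y, -q.z};
    Quat r = quatMul(quatMul(q, p), conj);
    return { r.x, r.y, r.z };
}

// Chain the frames as in the pseudocode: ray in lidar frame -> body -> world.
Vec3 rayInWorld(float ray_v_l, float ray_h_l, const Quat& lidar_q_b,
                const Quat& vehicle_q_w, float max_distance) {
    Quat ray_q_l = quatFromEuler(ray_v_l, 0.0f, ray_h_l);
    Quat ray_q_b = quatMul(lidar_q_b, ray_q_l);
    Quat ray_q_w = quatMul(vehicle_q_w, ray_q_b);
    // front() * max_distance, rotated into the world frame (NED, meters):
    return rotateVec({max_distance, 0.0f, 0.0f}, ray_q_w);
}
```

With identity lidar and vehicle poses, a 90-degree horizontal ray turns +X (north, in NED) into +Y (east), which is a quick sanity check on the conventions.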
Unreal/Plugins/AirSim/Source/Vehicles/Multirotor/MultirotorPawnSimApi.cpp (resolved comment; outdated)
Oh, this is great; thanks for doing it! One thought: instead of simulating each individual laser firing, consider using the existing sim depth sensor, and resampling from the fixed-width / fixed-height sample deltas you get from a Kinect / flash lidar / the existing sim depth sensor to the fixed-angle deltas you get with a spinning lidar like a traditional Velodyne. That would presumably provide a giant speedup, since you could use the existing (GPU-based) render pipeline instead of performing all the collision calculations on the CPU. Cheers!
1. Change coordinate math in UnrealLidarSensor to use AirLib vector math.
2. Move draw support to SimMode/SimModeBase.
3. Fix coding style/convention discrepancies.
@KerryMoffittRtn -- I agree with the goal of reducing the number of lasers fired or the number of calculations done. However, I am unable to understand the solution you are proposing above. Apologies for being dense. Can you clarify the resampling approach in a bit more detail? Cheers.
Sure! As I understand your approach, you cast a ray per laser firing and test for collision with the world geometry. This seems sound and will presumably provide the functionality you desire, though it means asking Unreal to navigate the geometry of the scene to perform collision detection (presumably on the CPU) for every ray cast.

To understand the alternative approach I am describing, first consider the existing depth capture mechanism (see ImageType::DepthVis, USceneCaptureComponent2D, and the picture-in-picture that the '1' key toggles). It works a lot like the standard scene renderer (heavily optimized, taking full advantage of the parallelism offered by the GPU) except that it yields depth information instead of color information.

Now, this is super fast and outputs a 2D array of depth samples (i.e., lots of samples at once), but they're not geometrically structured the way you want: the rays all pass through a focal plane, on which we have a fixed width and height between adjacent samples. What you want for a spinning lidar is a fixed angle between adjacent samples. So you would need to reorganize the data by resampling, and the cost of this resampling (though it may not be terrible) would need to be weighed against the fast performance of the initial depth sampling. (You'd also need to collect samples on multiple planes, of course, at least three, to cover all 360 degrees of a circle.)

This is a little tricky, I realize. Happy to chat more if I'm still not making sense. :)
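The fixed-pixel-to-fixed-angle resampling described above can be sketched for a single image row under simple pinhole assumptions (planar depth, principal point at the row center, nearest-pixel lookup). The function name and parameters here are illustrative, not AirSim's actual API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A depth image gives samples at fixed *pixel* spacing on a focal plane;
// a spinning lidar wants samples at fixed *angle* spacing. For one row of
// planar depth (distance along the optical axis), resample to fixed azimuth
// steps. Azimuths outside this camera's FOV are skipped; covering 360
// degrees would require additional capture planes, as noted above.
std::vector<float> resampleRowToAngles(const std::vector<float>& depth_row,
                                       float focal_px,   // focal length, pixels
                                       float step_rad,   // angular spacing
                                       int num_rays) {
    float cx = (depth_row.size() - 1) / 2.0f;  // principal point at row center
    std::vector<float> ranges;
    for (int i = 0; i < num_rays; ++i) {
        // Azimuth of this ray, centered on the optical axis:
        float theta = (i - (num_rays - 1) / 2.0f) * step_rad;
        // A ray at azimuth theta crosses the focal plane at u = cx + f*tan(theta):
        int u = static_cast<int>(std::lround(cx + focal_px * std::tan(theta)));
        if (u < 0 || u >= static_cast<int>(depth_row.size()))
            continue;  // outside this camera's FOV
        // Convert planar depth to range along the ray:
        ranges.push_back(depth_row[u] / std::cos(theta));
    }
    return ranges;
}
```

For a flat wall at constant planar depth, the on-axis ray returns the depth itself while off-axis rays return slightly longer ranges, which matches the depth-vs-range distinction the comment relies on.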
This is a very useful feature. If performance needs to be optimized, it would probably help to ask in the Unreal forums how to implement this most efficiently. You might also find some useful code in the NVIDIA WaveWorks project; it uses ray casting (on the GPU) to determine the buoyancy of actors at various locations on the map.
#include "common/CommonStructs.hpp"
#include "common/ImageCaptureBase.hpp"
#include "sensors/SensorCollection.hpp"
#include "sensors/Lidar/LidarBase.hpp"
Should be #include "sensors/lidar/LidarBase.hpp", shouldn't it?
I think you're right. I was getting build errors; after replacing that line with the suggested one, it builds. Thanks.
Ah, Linux issue... apologies... fixing now.
@KerryMoffittRtn Is that how Gazebo does it? I think I saw something like that in the Velodyne simulator for Gazebo, and I was a bit confused as to why we would do it using cameras. Is there no way of manually ray tracing on the GPU without going through the rendering pipeline?