This article covers generating and exporting point clouds from a Project Tango app.

Problem description

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example on Google's GitHub.

Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values; the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 from the PCLManager to a .xyz file using:

OutputStream os = new FileOutputStream(file);
int size = pointCloud.size();
for (int i = 0; i < size; i++) {
    Vector3 p = pointCloud.get(i);
    String row = p.x + " " + p.y + " " + p.z + "\r\n";
    os.write(row.getBytes());
}
os.close();
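For larger clouds the per-point write() calls can get slow. A buffered variant of the same export loop could look like this (a sketch using plain double[] triples in place of the Vector3 class, which is assumed to come from the Tango utils):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class XyzWriter {
    // Writes one "x y z" line per point; each point is a {x, y, z} triple.
    public static void write(String path, List<double[]> points) throws IOException {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
            for (double[] p : points) {
                out.write(p[0] + " " + p[1] + " " + p[2] + "\r\n");
            }
        }
    }
}
```

The try-with-resources block also guarantees the file is closed even if a write fails, which the bare os.close() above does not.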

Everything works fine; no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.

Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData exactly does. Could someone explain it to me and help me fix my point cloud?

Answer

Yes, you have to use TangoPoseData.

I guess you are using TangoXyzIjData correctly; but the data you get this way is relative to where the device is and how the device is tilted when you take the shot.
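In other words, each recorded frame lives in the depth camera's frame at capture time, and its points must be rotated and translated by the pose before frames line up. The underlying math is just p' = R(q) * p + t, where q and t are the pose's rotation quaternion and translation. A self-contained sketch of that step (plain Java, not the Tango API):

```java
public class PoseTransform {
    // Applies a pose (unit quaternion q = {x, y, z, w} plus translation t)
    // to a point p, returning the point in the base frame: R(q) * p + t.
    public static double[] apply(double[] q, double[] t, double[] p) {
        double x = q[0], y = q[1], z = q[2], w = q[3];
        // Rotation matrix built from the quaternion.
        double[][] r = {
            {1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)},
            {2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)},
            {2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)}
        };
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = r[i][0]*p[0] + r[i][1]*p[1] + r[i][2]*p[2] + t[i];
        }
        return out;
    }
}
```

This is essentially what the ScenePoseCalculator helpers below do for you once the pose and the extrinsics are composed.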

Here's how I solved it:
I started from the java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.

First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above):

private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}

Then when you get the point cloud you have to "normalize" it. Using your extrinsics this is pretty simple:

public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();

    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());

    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(),cloud.xyz.get(),cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }

    return normalizedCloud;
}

This should be enough; now you have a point cloud with respect to your base frame of reference. If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.

There is another way to do this with rotation matrices, explained here.

My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.
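One likely speed-up (my guess, not measured on the dev kit): compose the pose and the extrinsics into a single 4x4 matrix once per frame and stream the raw FloatBuffer through it, instead of allocating a Vector3 and recomputing the transform per point. A sketch, independent of the Tango classes:

```java
import java.nio.FloatBuffer;

public class CloudTransformer {
    // Applies a 4x4 row-major rigid transform m to every xyz triple in the
    // buffer, writing the results into out (3 floats per point).
    public static void transform(float[] m, FloatBuffer xyz, float[] out) {
        int i = 0;
        xyz.rewind();
        while (xyz.hasRemaining()) {
            float x = xyz.get(), y = xyz.get(), z = xyz.get();
            out[i++] = m[0]*x + m[1]*y + m[2]*z  + m[3];
            out[i++] = m[4]*x + m[5]*y + m[6]*z  + m[7];
            out[i++] = m[8]*x + m[9]*y + m[10]*z + m[11];
        }
    }
}
```

Preallocating the output array and avoiding per-point object creation removes most of the garbage-collection pressure that a Vector3-per-point loop produces.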

At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI. (In fact, I'm stuck at the moment.)

I guess you are experiencing some drift.
Drift happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose, which together cause a large error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle tracking your TangoPoseData, and then plot your trajectory in a spreadsheet, you'll notice that the tablet never returns to its starting point because it is drifting away.
The solution to that is using Area Learning. If you have no clear idea about this topic I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.


Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area. Here's my code:

private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
    new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}

Now you can use this FRAME_PAIRS as usual.

Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CANNOT use learning mode or load an ADF (area description file).
So you can't use:

  • TangoConfig.KEY_BOOLEAN_LEARNINGMODE
  • TangoConfig.KEY_STRING_AREADESCRIPTION

Here's how I initialize TangoConfig in my app:

TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drift.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);

Using this technique you'll get rid of that drift.

Note: the config key originally referenced here no longer exists (at least in the Java API). Use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
