
How could I measure spatial displacement with the propeller in real time?

I've recently put together a very simple 3D engine for the Propeller, and using a 2125 accelerometer and compass module, I was able to control the camera rotation of the 3D graphics. I was wondering what options I might have for adding full motion control, so I can move the camera as well as rotate it. I figured this sensor looked like it might be useful for some of the sensor processing, but I didn't see that it could calculate displacement, so I wondered: has anyone worked with something similar? I'd like to hear what you used so I can see what would work best. Thanks!

Comments

  • You could, in theory, integrate the accelerometer values over time to get velocity, and then integrate the velocity values to get position. In practice that's incredibly unreliable.

    The only way to really figure out physical displacement is with some fixed points of reference and triangulation. You could extract the camera element from a Wiimote and use its output as a cheap motion-capture setup, like this:

    [embedded video]
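
    To make the naive approach concrete, here's a minimal C sketch of the double integration (read_accel() and the 100 Hz rate are assumptions, not real driver code):

        #include <stdio.h>

        #define DT 0.01f  /* assumed 100 Hz sample rate */

        /* hypothetical driver stub: gravity-compensated acceleration, m/s^2 */
        static float read_accel(void) { return 0.0f; }

        int main(void) {
            float vel = 0.0f, pos = 0.0f;
            for (int i = 0; i < 1000; i++) {          /* 10 seconds of samples */
                vel += read_accel() * DT;  /* first integration:  accel -> velocity    */
                pos += vel * DT;           /* second integration: velocity -> position */
            }
            printf("pos = %.3f m\n", pos);
            return 0;
        }

    Any bias or noise in read_accel() accumulates into vel and never leaves, which is why the position wanders.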
  • That's a good idea, and I did consider it, but it wouldn't be ideal since I planned for the viewer to be self-contained. I realize the double integration causes problems, but since I've seen people use IMUs to track displacement reasonably accurately, I figure it should be possible with proper filtering. How that would be accomplished is the part where I need more information.
  • A nephew of mine has been pursuing the answer to this question as part of his post-graduate work. He has reached the same conclusion as Jason: Triangulation from fixed references. He is focusing now (no pun intended) on natural references and the algorithms that select and track them and analyze their apparent motion.
  • For very specific cases where you know your usage well enough, you might be able to use certain points in time as "reset" points. For example, if you were tracking the user's feet, you could assume that once a foot hits the ground it's stationary, and zero your integrated velocity.

    In the absence of that, you're going to develop velocity drift and the second integration will make your position totally wrong pretty quickly.
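
    In code, that "reset" is a zero-velocity update. A minimal C sketch, where foot_on_ground() is a hypothetical detector (a foot switch, or the accel magnitude sitting near 1 g for a few samples):

        #include <stdbool.h>
        #include <stdio.h>

        #define DT 0.01f  /* assumed 100 Hz update rate */

        /* hypothetical stubs: contact detection and a gravity-compensated
         * accelerometer read, m/s^2 */
        static bool  foot_on_ground(void) { return false; }
        static float read_accel(void)     { return 0.0f; }

        int main(void) {
            float vel = 0.0f, pos = 0.0f;
            for (int i = 0; i < 1000; i++) {
                if (foot_on_ground())
                    vel = 0.0f;               /* known stationary: zero the velocity */
                else
                    vel += read_accel() * DT; /* otherwise integrate as usual */
                pos += vel * DT;              /* drift stops growing while stopped */
            }
            printf("pos = %.3f m\n", pos);
            return 0;
        }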
  • By totally wrong, how wrong do you mean?

    Will the orientation be off by a few degrees, or will it eventually point in a random direction?

    Will the displacement be off by centimeters per millimeter, or will it be off by kilometers per centimeter?

    Maybe we can work with smaller errors while the errors remain small?
  • Orientation can be corrected without much trouble. Position is MUCH harder because without an absolute frame of reference, you have no idea what your actual velocity is. If you integrate acceleration to get velocity, you will accumulate small errors and they stay there.

    Then you integrate THAT value over time to produce a position estimate, which means that over time your error gets larger.

    That all assumes that your accelerometer readings have no offset themselves and are perfectly calibrated, which won't be the case. Any offset error in your accelerometer readings gets accumulated into the velocity estimate, so the error grows linearly. That error is then accumulated into your position estimate, meaning the error is compounded quadratically.

    It becomes significantly worse when you add orientation to the mix. Because your orientation is an estimate, and you're using it to correct the acceleration vector, you introduce additional error, which translates into additional velocity error, which compounds over time into worse position estimates.

    If you can be sure that your sensor will "dead stop" at somewhat regular intervals, and you can detect this, you can use it to reset your integrated velocity value. Otherwise the position will accumulate the velocity error and continuously get worse. Even this dead-stop technique doesn't correct your position error, but it will at least prevent it from getting worse when not moving.
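
    To put rough numbers on that growth: a constant accelerometer bias b produces a velocity error of b·t and a position error of ½·b·t². A quick C check (the 0.02 m/s² bias is just an illustrative assumption):

        #include <stdio.h>

        int main(void) {
            const float bias = 0.02f;  /* assumed accelerometer offset, m/s^2 */
            for (int t = 10; t <= 60; t += 10) {
                float vel_err = bias * t;             /* grows linearly      */
                float pos_err = 0.5f * bias * t * t;  /* grows quadratically */
                printf("t=%2ds  vel err=%.2f m/s  pos err=%.1f m\n",
                       t, vel_err, pos_err);
            }
            return 0;
        }

    At t = 60 s that's 1.2 m/s of velocity error and 36 m of position error, from an offset too small to feel.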
  • JasonDorie wrote: »
    You could extract the camera element from a Wiimote and use the output of it as a cheap motion-capture setup, like this:

    There's code to do this for the Propeller. IMO, it's very cool.

    I was amazed at how well the cheap MPU6050 modules worked with the I2Cdevlib code. These modules are just a few dollars on eBay. I didn't use a magnetometer, but even without one the yaw heading stayed very consistent.

    The BNO055 sensor is supposed to be good for this sort of stuff. I have one but I haven't played with it much.
  • This demo takes one cog for the sensor reads and one cog for the IMU / FPU:

    [embedded video]

    This is the IMU code for the upcoming Elev8 flight controller. I internally use double integration of the accelerometer data to compute vertical velocity and fuse that with the altimeter readings so they correct each other. Works reasonably well, but the altimeter is a (more or less) fixed reference, which is what you need to make it work.
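
    Conceptually the fusion is a complementary filter. A stripped-down C sketch of the idea (not the actual Elev8 code; the stubs, rates, and gains here are all assumptions):

        #include <stdio.h>

        #define DT    0.004f  /* assumed 250 Hz IMU loop rate  */
        #define K_POS 0.05f   /* altimeter correction gains:   */
        #define K_VEL 0.01f   /* tuning values are assumptions */

        /* hypothetical stubs for the two sensors */
        static float read_vert_accel(void) { return 0.0f; } /* m/s^2, gravity removed */
        static float read_altimeter(void)  { return 0.0f; } /* altitude, m            */

        int main(void) {
            float vel = 0.0f, alt = 0.0f;
            for (int i = 0; i < 1000; i++) {
                /* predict: double integration of the accelerometer */
                vel += read_vert_accel() * DT;
                alt += vel * DT;

                /* correct: pull both estimates toward the fixed(ish) reference */
                float err = read_altimeter() - alt;
                alt += err * K_POS;
                vel += err * K_VEL;
            }
            printf("alt = %.2f m, vel = %.2f m/s\n", alt, vel);
            return 0;
        }

    The altimeter correction continuously bleeds off the accumulated velocity error, which is exactly what pure double integration lacks.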
  • Yaw without a magnetometer will drift significantly faster if you keep spinning the sensor around; it's easier to trim out the static error than the dynamic non-linearities.