size of posts
rjo__
Posts: 2,114
I have a source file that is about 80k. By the time it becomes an application, the bundle is many megabytes.
What is the forum limit for zip archives?
Comments
And to keep stats together, the limit for post content is 15000 characters.
What on Earth have you got in there?
I did hit the 15K once. Sheesh.
GitHub is scary...
Get a grip. Smile What is the other stuff you have in there?
Github is your friend
Programs are called sketches. In order to write and use a sketch you have to download and install Processing 3. It is all free and cross-platform, but it takes time and fiddling. Once you do that, you have to go get the latest versions of several libraries that are not included. More fiddling and fussing around. On the other hand, Processing 3 lets you generate a full application. But in order to use the application you have to have the right Java installed... or Java can be embedded directly in the application... and voilà, you get 8 megs. For Linux the application is about 1M because Java can't be embedded. I am using a GUI library with no Mac support, so it isn't fully cross-platform.
The attached jpg is a picture of the Propeller2v taking a picture of itself, while the Kinect V2 measures the Propeller's video monitor...
http://forums.parallax.com/discussion/138976/test-poll/p1
Looking at Dropbox now. Fingers crossed.
Still, if it's fun, why not.
And while I am tinkering with it to support the P2v, it could easily be adapted to work with the Prop1, BasicStamp or ... you know, those other controllers.
Currently implemented features of general interest:
On a Kangaroo:
Serial... supports baud rates up to 460800.
Camera... one USB camera at 30 FPS.
Kinect V2... around 6 FPS with all the bells and whistles. With just the RGB depth map... it is reporting 60 FPS. This is probably wrong, but it looks pretty good.
There are two choices of libraries supporting the Kinect V2 for Processing 3. They are mutually exclusive because they use different USB3 drivers.
I am using Daniel Shiffman's OpenKinect library, which isn't entirely complete yet, but does have the color mapping functions, which I couldn't get to work in the other library. Daniel's library supports use on the Mac. I haven't done this yet, but the calls should work. My app doesn't work on a Mac because the GUI library I am using, controlP5, isn't available for the Mac.
Decoding the depth map to actual 3D measurements isn't readily available in either of these libraries in a way that I could get at, so the 3D measurements you see rely on my own math. It isn't complete, but it is very simple and gives a reasonable first pass at it. I have a lot on my "to do" list. I wrote my own routine for flipping the raw depth data so that the image isn't mirrored on the screen and so that the depth measurements correspond to the camera's view. Then I found out that there is a library call for this, which is much faster... that would probably double the available frame rate.
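The depth-to-3D math above can be sketched with the standard pinhole-camera back-projection, plus a simple un-mirroring flip. This is Python rather than Processing, and the intrinsics (fx, fy, cx, cy) are assumed typical values for the 512x424 Kinect V2 depth camera, not the author's actual numbers:

```python
# Assumed Kinect V2 depth-camera parameters (not calibrated values).
DEPTH_W, DEPTH_H = 512, 424
FX, FY = 365.0, 365.0          # assumed focal lengths in pixels
CX, CY = 256.0, 212.0          # assumed principal point

def depth_to_point(u, v, depth_mm):
    """Back-project pixel (u, v) with depth in mm to camera-space meters."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)

def flip_depth_horizontal(raw, w=DEPTH_W, h=DEPTH_H):
    """Reverse each row of a row-major depth array so the image isn't
    mirrored and measurements line up with the camera's view."""
    return [raw[row * w + (w - 1 - col)]
            for row in range(h) for col in range(w)]
```

A pixel at the principal point with a 1000 mm reading back-projects to (0, 0, 1.0) m, which is a quick sanity check for the intrinsics.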
h... hub buffer to lut
o... do op and move to lut 0
l... lut to hub
i... send image to serial
c... copy cog to lut (thanks Ariba!!!)
r... read lut from serial
a... send 512 bytes array to serial
p... write keyboard to P2 screen.... not implemented yet
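The interface loop above is a single-character command dispatch. A minimal sketch of that pattern in Python (the handler bodies here are placeholders; the real ones would move data over the serial link to the P2):

```python
# Hypothetical stand-in handlers for the one-byte command loop above.
COMMANDS = {
    "h": lambda: "copy hub buffer to LUT",
    "o": lambda: "do op and move to LUT 0",
    "l": lambda: "copy LUT to hub",
    "i": lambda: "send image over serial",
    "c": lambda: "copy cog to LUT",
    "r": lambda: "read LUT from serial",
    "a": lambda: "send 512-byte array over serial",
}

def dispatch(cmd):
    """Look up a one-byte command and run its handler; ignore unknown bytes."""
    handler = COMMANDS.get(cmd)
    return handler() if handler else None
```

The dictionary lookup mirrors the simple compare-and-branch loop a P1-compatible interpreter would use, which is why the same command set ports easily between controllers.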
As you can see... on the P2 side... the interface loop is P1 code compatible.