
How do you handle FPS formats?

lardom Posts: 1,659
edited 2012-04-26 22:00 in Propeller 1
It just occurred to me that there might be a syncing problem with film @24fps vs. NTSC @30fps. There are also AVI, DVD, PAL, etc. I've never thought about it in engineering terms. I only noticed a sync problem when seeing a monitor on another monitor. How do you solve the phase problem?
(Since the Propeller can do video I decided to ask the question here.)

Comments

  • ericball Posts: 774
    edited 2012-04-26 09:37
    You're covering a lot of ground here, and most of it isn't Prop specific. What you're asking about comes under the general heading of transcoding, i.e. I have this video in this format and want it in a different format.

    24fps to 30/60Hz NTSC is typically handled via a process called telecine, or 3:2 pulldown. The basic idea is you show one frame for two 60Hz fields, then the next frame for three 60Hz fields. (Where each field is half the lines of an interlaced frame.) Handling the 0.1% difference between 60Hz and the actual 59.94Hz field rate can be done by either speeding up the audio or adding an extra field at the right interval. 25/50Hz PAL is often done by simply speeding up the playback or a similar pulldown idea.
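
    To make the cadence concrete, here's a minimal Python sketch; frames are just labels (no real interlaced field data), and telecine_32 is an invented name for illustration:

    ```python
    # Toy model of 3:2 pulldown: repeat each 24 fps frame for 2 fields,
    # then 3 fields, alternating. 4 film frames -> 10 fields -> 5 video
    # frames, so 24 fps becomes 30 fps (the 59.94 Hz fix-up is separate).

    def telecine_32(frames):
        fields = []
        for i, frame in enumerate(frames):
            fields.extend([frame] * (2 if i % 2 == 0 else 3))
        return fields

    fields = telecine_32(['A', 'B', 'C', 'D'])
    frames_30 = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(frames_30)  # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
    ```

    Note the two "dirty" frames that mix fields from different film frames; they are what makes reversing plain 3:2 pulldown messy.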

    Resolution differences are next, i.e. converting from 720x480 NTSC DVD or 720x576 PAL DVD to 640x480 VGA. The first thing to consider is aspect ratio - how square the pixels and the screen are. DVD will be either a 4:3 or a 16:9 screen, and depending upon the desired output you can letterbox, windowbox, or take what you want out of the middle. Next is handling the difference in the number of pixels. The easy way is to use only every n'th pixel (or repeat pixels); however, better results are achieved if the pixels are filtered first. A related (but more annoying) issue is that each pixel isn't full color: DVD is 4:2:0 sampled, so the color difference information is halved in both the horizontal and vertical directions.
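
    As a sketch of the "every n'th pixel" versus filtered approaches on a single scanline (720 pixels down to 640; both function names are mine):

    ```python
    # Two ways to shrink one grayscale scanline from 720 to 640 pixels.

    def nearest(src, out_w):
        # Take every n'th source pixel: cheap, but aliases fine detail.
        return [src[i * len(src) // out_w] for i in range(out_w)]

    def box_filtered(src, out_w):
        # Average the source pixels covering each output pixel: a crude
        # low-pass filter, giving smoother results than plain decimation.
        out = []
        for i in range(out_w):
            lo = i * len(src) // out_w
            hi = max(lo + 1, (i + 1) * len(src) // out_w)
            out.append(sum(src[lo:hi]) // (hi - lo))
        return out

    line = list(range(720))            # one scanline of fake luma values
    assert len(nearest(line, 640)) == len(box_filtered(line, 640)) == 640
    ```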

    Next is codecs, i.e. how the video or audio is compressed. For the most part this isn't too difficult - decode the video to raw pixels then re-encode as necessary. The challenge is trying to reduce or eliminate any steps which might cause quality issues - i.e. YCrCb to RGB to YCrCb conversions.
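
    Here's a rough Python sketch of why those round trips hurt, using full-range BT.601 constants (an assumption for brevity: real DVD uses studio-range BT.601, but the rounding-loss point is the same):

    ```python
    # 8-bit YCbCr <-> RGB with full-range BT.601 constants. Each direction
    # rounds to integers, so repeated conversions slowly lose information.

    def ycbcr_to_rgb(y, cb, cr):
        clip = lambda v: max(0, min(255, round(v)))
        return (clip(y + 1.402 * (cr - 128)),
                clip(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)),
                clip(y + 1.772 * (cb - 128)))

    def rgb_to_ycbcr(r, g, b):
        return (round(0.299 * r + 0.587 * g + 0.114 * b),
                round(128 - 0.168736 * r - 0.331264 * g + 0.5 * b),
                round(128 + 0.5 * r - 0.418688 * g - 0.081312 * b))

    pixel = (81, 90, 240)                        # roughly pure red
    print(rgb_to_ycbcr(*ycbcr_to_rgb(*pixel)))   # close, not always exact
    ```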

    Finally, the container format (i.e. AVI). Again, this is usually fairly simple, as the purpose of the container is simply to organize the audio and video data into a single file. The difficulty comes when the container isn't suited to the codec (i.e. using AVI for anything current) and in avoiding synchronization issues.
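
    As a toy picture of that organizing job, a mux that interleaves timestamped chunks into one stream (the chunk layout below is invented, nothing like real AVI/RIFF):

    ```python
    import struct

    def mux(video_chunks, audio_chunks):
        # Merge (timestamp_ms, payload) chunks from both streams into one
        # byte stream in presentation order, each with a tiny header.
        tagged = ([(t, b'V', d) for t, d in video_chunks] +
                  [(t, b'A', d) for t, d in audio_chunks])
        out = bytearray()
        for ts, kind, data in sorted(tagged):
            out += struct.pack('<cIH', kind, ts, len(data)) + data
        return bytes(out)

    stream = mux([(0, b'frame0'), (33, b'frame1')],
                 [(0, b'pcm0'), (21, b'pcm1'), (42, b'pcm2')])
    ```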
  • lardom Posts: 1,659
    edited 2012-04-26 10:13
    Thanks ericball, I didn't know how big the subject was. (I thought I could avoid a discussion of aspect ratios.) I'd never seen words like 'telecine', 'transcoding', '3:2 pulldown', etc. I'm even more curious than I was before. I'll research the subject until I get a clear understanding of the processes. Thanks.
  • JonnyMac Posts: 9,197
    edited 2012-04-26 10:30
    It gets wacky. I have a video camera that shoots "advanced" 2:3:3:2 pull-down to allow true 24p editing for something that will ultimately be displayed on an NTSC monitor.
  • GordonMcComb Posts: 3,366
    edited 2012-04-26 11:02
    JonnyMac wrote: »
    It gets wacky. I have a video camera that shoots "advanced" 2:3:3:2 pull-down to allow true 24p editing for something that will ultimately be displayed on an NTSC monitor.

    Wacky, yes, but specific to NTSC DV. I figured you for a native 24p man. OTOH, I can't even afford the lens cap for one of those cameras...

    -- Gordon
  • ericball Posts: 774
    edited 2012-04-26 11:28
    Having a camera shoot 2:3:3:2 might help in some editing & transcoding scenarios as it means each 24fps frame is stored in a 30fps frame with one blended frame out of 5. However it might introduce some judder artifacts.
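
    Using the same toy model as the 3:2 sketch above, the 2:3:3:2 cadence looks like this:

    ```python
    # 2:3:3:2 "advanced" pulldown: 4 film frames -> 10 fields -> 5 video
    # frames, with exactly one field-blended frame per group of five.

    def telecine_2332(frames):
        fields = []
        for frame, repeat in zip(frames, [2, 3, 3, 2]):
            fields.extend([frame] * repeat)
        return fields

    fields = telecine_2332(['A', 'B', 'C', 'D'])
    frames_30 = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(frames_30)  # [('A','A'), ('B','B'), ('B','C'), ('C','C'), ('D','D')]
    ```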
  • GordonMcComb Posts: 3,366
    edited 2012-04-26 12:31
    ericball wrote: »
    However it might introduce some judder artifacts.

    It does, but 24pA isn't really meant as a display format, just for editing (with compatible editing software). Panasonic was never really clear about that aspect, and it caused confusion. The workflow is meant to efficiently convert to 24p in an NLE; then it can be finished in any way desired (not just NTSC). Once pulldown is removed (merely by ignoring the 5th and 6th fields in the cadence) and the result is deinterlaced, the frames are just like any 24p material. From what I recall, there are no field dominance or other issues to contend with.
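
    In the toy model sketched above, the removal really is that simple: drop the third (blended) frame of each group of five and the four film frames fall out clean:

    ```python
    # 24pA pulldown removal: skipping fields 5 and 6 means skipping the
    # 3rd video frame in every group of 5; the rest weaves back to 24p.

    def remove_2332(video_frames):
        return [f for i, f in enumerate(video_frames) if i % 5 != 2]

    group = [('A','A'), ('B','B'), ('B','C'), ('C','C'), ('D','D')]
    print(remove_2332(group))   # A, B, C, D -- clean 24p frames
    ```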

    -- Gordon
  • JonnyMac Posts: 9,197
    edited 2012-04-26 12:32
    It turns out there are no blended frames when 2:3:3:2 is brought into a 24p timeline -- which is why I always use(d) this when shooting on my DVX100. I'm now shooting DSLR.

    [Image: 24pAdvanced.gif]

    Sorry, I've helped take this thread a bit off track. That said... I've had several EFX-TEK customers ask me when we're going to design a "cheap" video player with the Propeller. Answer: not any time soon. As with many technologies, it's harder than it looks.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-04-26 12:47
    The term "judder" drove me to Google -- never heard of it before.

    It seems to me that the obvious solution to the 2:3:3:2 issue is to record everything at 120 fps. Then you can down-sample as needed for both 24 fps and 30 fps without introducing uneven sampling.
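
    The arithmetic behind that: 120 is a common multiple of 24 and 30, so both rates are a clean every-n'th-frame decimation:

    ```python
    src = list(range(120))      # one second of 120 fps frames
    as_24 = src[::5]            # 120 / 24 = 5
    as_30 = src[::4]            # 120 / 30 = 4
    assert len(as_24) == 24 and len(as_30) == 30
    ```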

    -Phil
  • GordonMcComb Posts: 3,366
    edited 2012-04-26 13:51
    It seems to me that the obvious solution to the 2:3:3:2 issue is to record everything at 120 fps.

    My calculator blew up trying to figure out the bitrate for a 4K resolution stream at 120 fps.
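
    For the record, the number that broke it, assuming raw (uncompressed) 24-bit 3840x2160 pixels:

    ```python
    w, h, bytes_per_px, fps = 3840, 2160, 3, 120
    rate = w * h * bytes_per_px * fps
    print(f"{rate / 1e9:.1f} GB/s = {rate * 8 / 1e9:.0f} Gbit/s")
    # ~3.0 GB/s, ~24 Gbit/s -- before any compression at all
    ```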

    In any case, NTSC is a "legacy" format, so I don't think it makes much difference to accommodate it in any special way. Shooting at 24p and bumping up seems to suit most NTSC applications. All the cameras and workflows now support that.

    Jon, I've been blown away with the video features on the new DSLRs. Do you recognize the camera these guys are using? Have you ever used one of these Fig Rig jobbies? It seems to work very well, judging by the finished work, also on YouTube. I figure if anyone knew this stuff, you would.

    -- Gordon
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-04-26 14:06
    My calculator blew up trying to figure out the bitrate for a 4K resolution stream at 120 fps.
    How about 2500+ fps at 1080p, then?

    -Phil
  • JonnyMac Posts: 9,197
    edited 2012-04-26 14:37
    Jon, I've been blown away with the video features on the new DSLRs. Do you recognize the camera these guys are using? Have you ever used one of these Fig Rig jobbies?

    Hard to tell what the camera is, but based on the logo on the young man's shirt I'm betting it's a Canon 5D Mark II, perhaps a 7D (I use the 60D).

    I absolutely LOVE my FigRig and use it all the time. Funny, I have taken it out to sets and parties where pros (i.e., much better at shooting movies than me -- I'm an actor) are present and they're always blown away by the "steering wheel." As camera rigs are getting smaller the FigRig is gaining in popularity. It's great for the run-and-gun stuff I tend to do.

    Here's me in a tub with Lauren De Long (you may recognize her from a Microsoft commercial when they were trying to be cool) shooting on the aforementioned DVX100.

    [Image: jon_with_figrig.jpg]

    And here's my pal Lynda Reynoso with my FigRig loaded for a shoot that we did about a year ago (my first DSLR project) -- you can see it here: http://www.youtube.com/watch?v=_2hGic9hh94. We did this as an exercise: an attempt to shoot and finish a short piece in a day (we did, but it was a long day).

    [Image: lyn_with_figrig.jpg]

    That little piece is, essentially, a monologue, so I shot it on a FigRig; this allowed me to act (without dialog) while holding the camera on Lynda. Also mounted to the rig are a shotgun mic and a Zoom H4n audio recorder. DSLRs shoot great images and video but do not record sound well.

    @Phil: I think James Cameron is on record suggesting we should shoot and project at 120 fps for a more realistic experience. Funny, 24 fps was born out of economics (the practical length of celluloid film) and has now become an aesthetic standard that most digital shooters attempt to emulate (mostly in post production).
  • GordonMcComb Posts: 3,366
    edited 2012-04-26 17:52
    Jon, Very cool video. Thanks for sharing!

    Don't know if you're a Mitchell and Webb fan, but I figured they used something similar (strapped to Webb's waist or something?) on the reverse POV shots for 'Sir Digby Chicken Caesar'. The effect is hilarious.

    -- Gordon
  • JonnyMac Posts: 9,197
    edited 2012-04-26 18:04
    I love moving shots, but I've never liked those strap-on rig shots -- we don't see things in life that way, so its use has to be done with care. As in this piece, it's usually for a specific effect.

    Pause the video at 0:23 and you'll see something we all face: reflection problems. When we shot our little Doritos commercial for the Super Bowl contest we had that, too. Luckily, our DP and Director, Peter Montgomery (PJMonty here in the forums), was able to remove the offending reflections and errant equipment with After Effects. In a past life he did special effects work for Disney and is an expert compositor.
  • Circuitsoft Posts: 1,166
    edited 2012-04-26 18:08
    For cheap cameras, even my Panasonic Lumix DMC-FZ28 ($300 three years ago) shoots very good 720p video, though the format is a series of JPEG images in a .mov container. That's perfect for editing, because you don't get artifacts from not cutting on keyframes - every frame is a keyframe.
  • JonnyMac Posts: 9,197
    edited 2012-04-26 19:03
    Compression is an issue when shooting with DSLRs, especially with the H.264 format. I use a product called Cineform Neoscene to convert the MOV files from the camera to an AVI with an expanded color space (the Cineform codec is used in "the industry" for hi-def work); this allows for less banding in post. When I shot the little beach video with Lynda I didn't have or know about this tool, and my editor (Sony Vegas) was constantly choking on the MOV file. We got there, but it was rough.

    Back in the Propeller world, it will be interesting to see if it has the horsepower to play low-compression video from external media. Chip told me that he'll be writing a driver for an SD card, but I think it may take parallel access to get the bandwidth up enough to be meaningful.
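
    A back-of-envelope check of that bandwidth point (the ~1 MB/s single-bit SPI figure and the video formats below are assumptions, not measured Propeller numbers):

    ```python
    SD_SPI = 1_000_000                        # ~1 MB/s, optimistic SPI SD
    for w, h in [(160, 120), (320, 240), (640, 480)]:
        need = w * h * 1 * 30                 # 8 bpp, 30 fps, uncompressed
        ok = "fits" if need <= SD_SPI else "needs parallel access"
        print(f"{w}x{h}: {need / 1e6:.2f} MB/s -> {ok}")
    ```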
  • lardom Posts: 1,659
    edited 2012-04-26 22:00
    JonnyMac, interesting discussion. I have a Canon 50D which I use for lenticular 3D photography. That gives me confidence that Canon makes a quality camera.