August 16, 2014

Transcoding Modern Formats

I've noticed that this blog still gets a decent amount of traffic, particularly to some of the older articles about transcoding. Since I've been working on a tool in this space recently, I thought I'd write something up in case it helps folks sort out how to think about transcoding these days.

The tool I've been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?

Dailies

After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren't equipped with editing suites or viewing stations - they want to view footage on their desktop or mobile device. That can be a problem if you're shooting ProRes or similar.

Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.

One common workflow would be to drop all the footage from a given shot into EditReady. Use the "set metadata for all" command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built-in player to make sure it's what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they're working on a PC, a Mac, or even a mobile device.

If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional "video levels" daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

Mezzanine Formats

Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you're working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren't being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the "flavors" of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you'll generally be exporting these formats from other parts of your pipeline as well - getting ProRes effects shots for example - you don't have to worry about mix-and-match problems cropping up late in the production process either.

Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster - transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an "exporting" bar endlessly creep along.

Modernization

The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it's gotten harder to play some of those old formats. There's a lot of irreplaceable footage stored in formats like Sorenson Video, Apple Intermediate Codec, or Apple Animation. It's important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you've still got footage like that around, it's time to bring it forward!

Output

Finally, the powerful H.264 transcoding pipeline in EditReady means you can generate beautiful, deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 file for delivery. It's never been this easy!

See for yourself

We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.

Posted at 10:23 PM

December 7, 2012

Moving Along

After many years on UThink, this blog now lives at www.discretecosine.com. And it's semi-active again!

Posted at 12:12 PM

January 21, 2011

An introduction to reverse engineering

(This blog is still in hibernation, but I needed somewhere to post this)

Reverse engineering is one of those wonderful topics, covering everything from simple "guess how this program works" problem solving, to poking at silicon with scanning electron microscopes. I'm always hugely fascinated by articles that walk through the steps involved in these types of activities, so I thought I'd contribute one back to the world.

In this case, I'm going to be looking at the export bundle format created by the Tandberg Content Server, a device for recording video conferences. The bundle is intended for moving recordings between Tandberg devices, but it's also the easiest way to get all of the related assets for a recorded conference. Unfortunately, there's no parser available to take the bundle files (.tcb) and output the component pieces. Well, that just won't do.

For this type of reverse engineering, I basically want to learn enough about the TCB format to be able to parse out the individual files within it. The only tools I'll need in this process are a hex editor, a notepad, and a way to convert between hex and decimal (the OS X calculator will do fine if you're not the type to do it in your head).

Step 1: Basic Research
After Googling around to see if this was a solved issue, I decided to dive into the format. I brought a sample bundle into my trusty hex editor (in this case Hex Fiend).

1-1.jpg

A few things are immediately obvious. First, we see the first four bytes are the letters TCSB. Another quick visit to Google confirms this header type isn't found elsewhere, and there's essentially no discussion of it. Going a few bytes further, we see "contents.xml." And a few bytes after that, we see what looks like plaintext XML. This is a pretty good clue that the TCB file consists of a series of files packed together, each preceded by its filename. Let's scan a bit further and see if we can confirm that.
1-2.jpg
In this segment, we see the end of the XML, and something that could be another filename - "dbtransfer" - followed by what looks like gibberish. That doesn't help too much. Let's keep looking.
1-3.jpg
Great - a .jpg! Looking a bit further, we see the letters "JFIF," which is recognizable as part of a JPEG header. If you weren't already familiar with that, a quick Google search for "jpg hex header" would clear up any confusion. So, we've got the basics of the file format down, but we'll need a little bit more information if we're going to write a parser.
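If you'd rather poke at the bytes programmatically than in a hex editor, a few lines of Python confirm what we've seen so far (the bundle filename here is hypothetical):

# Quick sanity check on the structure we spotted in the hex editor.
with open("export.tcb", "rb") as f:  # hypothetical bundle name
    data = f.read()

print(data[:4])                    # b'TCSB' -- the magic number
print(data.find(b"contents.xml"))  # the first filename, a few bytes in
print(data.find(b"JFIF"))          # a JPEG header somewhere further on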

Step 2: Finding the pattern
We can make an educated guess that a file like this has to provide a few hints to a decoder. We would expect either a table of contents describing where in the bundle each individual file is located, some sort of stop bit marking the boundary between files, byte offsets describing the locations of files, or a listing of file lengths.

There isn't any sign of a table of contents. Let's start looking for a stop bit, as that would make writing our parser really easy. What I'm going to do is pull out all of the data between two prospective files, and I want two sets to compare.
I've placed asterisks to flag the bytes corresponding to the filenames, since those are known.

1E D1 70 4C 25 06 36 4D 42 E9 65 6A 9F 5D 88 38 0A 00 *64 62 74 72 61 6E 73 66 65 72* 42 06 ED 48 0B 50 0A C4 14 D6 63 42 F2 BF E3 9D 20 29 00 00 00 00 00 00 DE E5 FD

01 0C 00 *63 6F 6E 74 65 6E 74 73 2E 78 6D 6C* 9E 0E FE D3 C9 3A 3A 85 F4 E4 22 FE D0 21 DC D7 53 03 00 00 00 00 00 00

The first line corresponds to the "dbtransfer" entry, the second to the "contents.xml" entry. Let's trim the first entry to match the second.

38 0A 00 *64 62 74 72 61 6E 73 66 65 72* 42 06 ED 48 0B 50 0A C4 14 D6 63 42 F2 BF E3 9D 20 29 00 00 00 00 00 00

01 0C 00 *63 6F 6E 74 65 6E 74 73 2E 78 6D 6C* 9E 0E FE D3 C9 3A 3A 85 F4 E4 22 FE D0 21 DC D7 53 03 00 00 00 00 00 00

It looks like we've got three bytes before the filename, and after it, 18 bytes followed by six bytes of zero. Unfortunately, there's no obvious pattern of bits which would correspond to a "break" between segments. However, looking at those three bytes before each filename, we see a 0x0A and a 0x0C - two small values in the same place: 10 and 12. Interesting - the 12 entry corresponds with "contents.xml" and the 10 entry corresponds with "dbtransfer". Could that byte describe the length of the filename? Let's look at our much longer JPG entry to be sure.

70 4A 00 *77 77 77 5C 73 6C 69 64 65 73 5C 64 37 30 64 35 34 63 66 2D 32 39 35 62 2D 34 31 34 63 2D 61 38 64 66 2D 32 66 37 32 64 66 33 30 31 31 35 65 5C 74 68 75 6D 62 6E 61 69 6C 73 5C 74 68 75 6D 62 6E 61 69 6C 30 30 2E 6A 70 67*

0x4A is 74, corresponding to a 74-character filename. Looks like we're in business.

At this point, it's worth an aside to talk about endianness. I happen to know that the Tandberg Content Server runs Windows on Intel, so I went into this with the assumption that the format was little-endian. However, if you're not sure, it's always worth looking at words backwards and forwards, just in case.
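To make the endianness point concrete, here's how the two bytes before "contents.xml" read each way, using Python's struct module:

import struct

raw = b"\x0c\x00"  # the two bytes just before "contents.xml"
print(struct.unpack("<H", raw)[0])  # 12 little-endian -- matches len("contents.xml")
print(struct.unpack(">H", raw)[0])  # 3072 big-endian -- clearly not a filename length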

So we know how to find our filename. Now how do we find our file data? Let's go back to our JPEG. We know that JFIF-style JPEGs start with 0xFFD8FFE0, and a quick trip to Google also tells us that they end with 0xFFD9. We can use that to pull a sample JPEG out of our TCB, save it to disk, and confirm that we're on the right track.
2-2.jpg
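Carving that JPEG out by hand is easy to script, too - something like this sketch (markers per the JPEG/JFIF format; filenames hypothetical):

# Carve the first JPEG out of the bundle by its start/end markers.
with open("export.tcb", "rb") as f:  # hypothetical bundle name
    data = f.read()

start = data.index(b"\xff\xd8\xff\xe0")   # JPEG/JFIF start-of-image
end = data.index(b"\xff\xd9", start) + 2  # end-of-image marker
with open("test.jpg", "wb") as out:
    out.write(data[start:end])
print(end - start, "bytes written")       # 2177 for the sample here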

This is one of those great steps in reverse engineering - concrete proof that you're on the right track. Everything seems to go quicker from this point on.

So, we know we've got a JPEG file in a continuous 2177-byte segment. We know that the format uses byte lengths to describe filenames - maybe it also uses byte lengths to describe file lengths. Let's look for 2177 - 0x0881, which will appear as 81 08 in little-endian order - near our JPEG.

2-3.jpg

Well, that's a good sign. But, it could be coincidental, so at this point we'd want to check a few other files to be sure. In fact, looking further in some files, we find some larger .mp4 files which don't quite match our guess. It turns out that file length is a 32-bit value, not a 16-bit value - with our two JPEGs, the high bytes just happened to be zeros.
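The 16-bit trap is easy to demonstrate - reading only two bytes works right up until a file is bigger than 65,535 bytes (a sketch; the .mp4 length below is made up):

import struct

small = b"\x81\x08\x00\x00"  # the 2177-byte JPEG: high bytes are zero
large = b"\x00\x00\x40\x01"  # a made-up ~21 MB .mp4 length: they aren't

for raw in (small, large):
    print(struct.unpack("<H", raw[:2])[0], struct.unpack("<I", raw)[0])
# 2177 2177    -- 16-bit and 32-bit reads agree for small files
# 0 20971520   -- and disagree wildly once a file passes 65,535 bytes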

Step 3: Writing a parser

"Bbbbbut...", I hear you say! "You have all these chunks of data you don't understand!"

True enough, but all I care about is getting the files out, with the proper names. I don't care about creation dates, file permissions, or any of the other crud that this file format likely contains.

3-1.jpg

Let's look at the first two files in this bundle. A little bit of byte counting shows us the pattern that we can follow. We'll treat the first file as a special case. After that, we seek 16 bytes from the end of file data to find the filename length (2 bytes), then we're at the filename, then we seek 16 bytes to find the file length (4 bytes) and seek another 4 bytes to find the start of the file data. Rinse, repeat.

I wrote a quick parser in PHP, since the eventual use for this information is part of a larger PHP-based application, but any language with basic raw file handling would work just as well.
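The original parser is in PHP (linked below), but the loop is easy to sketch in any such language. Here's a rough Python version of the same logic - the 16-byte gaps come from the byte counting above, and since we never pinned down the fixed header size, this sketch cheats and finds the first entry by searching for "contents.xml". The bundle filename at the bottom is hypothetical.

import os
import struct

def parse_tcb(path, outdir="extracted"):
    """Extract the files from a Tandberg .tcb bundle, following the
    layout worked out above (all offsets are empirical guesses)."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:4] != b"TCSB":
        raise ValueError("not a TCSB bundle")
    os.makedirs(outdir, exist_ok=True)

    # First-file special case: rather than hardcoding a header size,
    # find the first entry via its known filename.
    pos = data.index(b"contents.xml") - 2

    while pos + 2 <= len(data):
        # 2-byte little-endian filename length, then the filename itself.
        (name_len,) = struct.unpack_from("<H", data, pos)
        pos += 2
        if name_len == 0 or pos + name_len > len(data):
            break  # trailing bytes we don't understand; stop here
        name = data[pos:pos + name_len].decode("ascii")
        pos += name_len
        # 16 mystery bytes, a 4-byte little-endian file length, then
        # 4 more mystery bytes before the file data starts.
        pos += 16
        if pos + 8 > len(data):
            break
        (file_len,) = struct.unpack_from("<I", data, pos)
        pos += 8
        # Filenames use Windows separators; flatten them for output.
        with open(os.path.join(outdir, name.replace("\\", "_")), "wb") as out:
            out.write(data[pos:pos + file_len])
        pos += file_len
        # Another 16 bytes separate the end of this file's data from
        # the next entry's filename-length field.
        pos += 16

parse_tcb("export.tcb")  # hypothetical bundle name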

tcsParser.txt
This was about the simplest possible type of reverse engineering - we had known data in an unknown format, without any compression or encryption. It only gets harder from here...

Posted at 2:49 PM

June 3, 2010

A nice comparison of mics

DVEStore has done a great comparison of different types of microphones on video. Audio is a black art, and folks rarely put in the time to do A/B/C comparisons. We tend to just default to a set of mics that we've decided are "good enough" and then don't go back to reevaluate.

Posted at 1:08 PM | News

April 29, 2010

Presentation on the HTML5 video tag

A few weeks back, I was given the opportunity to present at MinneWebcon. My talk, "<video> will be your friend," focused on the legal issues and implementation possibilities surrounding the HTML5 video tag.

I've put my slides online, if you want to take a look. I've also recorded the first half of the lecture as part of a test of our Mocha class capture application. I'll be recording the second half Real Soon Now.

Posted at 12:41 PM

April 15, 2010

NAB 2010 wrapup

Another year of NAB has come and gone. Making it out of Vegas with some remaining faith in humanity seems like a successful outcome. So, anything worth talking about at the show?

First off, there's 3D. 3D is The Next Big Thing, and that was obvious to anyone who spent half a second on the show floor. Everything from camera rigs, to post production apps, to display technology was all 3D, all the time. I'm not a huge fan of 3D in most cases, but the industry is at least feigning interest.

Luckily, at a show as big as NAB, there's plenty of other cool stuff to see. So, what struck my fancy?

First off, Avid and Adobe were showing new versions of Media Composer and Premiere. Both sounded pretty amazing on paper, but I must say I was somewhat underwhelmed by both in reality. Premiere felt a little rough around the edges - the Mercury Playback Engine wasn't the sort of next-generation tech that I expected. Media Composer 5 has some nice new tweaks, but it's still rather Avid-y - which is good for Avid people, less interesting for the rest of us.

In other software news, Blackmagic Design was showing off some of what they're doing with the DaVinci technology that they acquired. Software-only DaVinci Resolve for $999 is a pretty amazing deal, and the demos were quite nice. That said, color correction is an art, so just making the technology cheaper isn't necessarily going to dramatically change the number of folks who do it well - see Apple's Color.

Blackmagic also has a pile of new USB 3.0 hardware devices, including the absolutely gorgeous UltraStudio Pro. Makes me pine for USB 3.0 on the Mac.

On the production side, we saw new cameras from just about everyone. To start at the high end, the Arri Alexa was absolutely stunning. Perhaps the nicest digital cinema footage I've seen. Not only that, but they've worked out a usable workflow, recording to ProRes plus RAW. At the price point they're promising, the world is going to get a lot more difficult for RED.

Sony's new XDCam EX gear is another good step forward for that format. Nothing groundbreaking, but another nice progression. I was kind of hoping we'd see 4:2:2 EX gear from them, but I suppose they need to justify the disc-based formats for a while longer.

The Panasonic AG-AF100 is another interesting camera, bringing micro 4/3rds into video. The only strange thing is the recording side - AVCHD to SD cards. While I'm thrilled to see them using SD instead of P2, it sure would have been nice to have an AVCIntra option.

Finally, Canon's 4:2:2 XF cams are a nice option for the ENG/EFP market. Nothing groundbreaking, aside from the extra color sampling, but it's a nice step up from what they've been doing.

Speaking of Canon, it's interesting to see the ways that the 5d and 7d have made their way into mainstream filmmaking. At one point, I thought they'd be relegated to the indie community - folks looking for nice DoF on a budget. Instead, they seem to have been adopted by a huge range of productions, from episodic TV to features. While they're not right for everyone, the price and quality make them an easy choice in many cases.

One of the stars of the show for me was the GoPro, a small waterproof HD camera that ships with a variety of mounts, designed to be used in places where you couldn't or wouldn't use a more full-featured camera. No LCD, just a record button and a wide angle lens. I bought two.

Those are the things that stand out for me. While there was plenty of interesting stuff to be seen, given the current economic conditions at the University, I wasn't exactly in a shopping mindset. The show definitely felt more optimistic than it did last year, and companies are again pushing out new products. However, attendance was about 20% lower than in 2008, and that was definitely noticeable on the show floor.

Posted at 10:48 AM | News

February 18, 2010

CaptionManager - easily add and remove captions from QT movies

Cough. Yeah. Remember this blog? Right then.

Here's a new little app to add and remove caption tracks (SCC files) from QuickTime files. In theory you can do this with QuickTime Pro, but it doesn't seem to work so well anymore.

This zip file includes the source for the app, Xcode project, and a compiled build.

Basically, you can open a QuickTime movie, and it'll detect whether there are already captions or not. Then you can strip the captions if they already exist (plus the associated timecode track) or add new captions from an SCC file. You'll either need to be on Snow Leopard or have the Caption Component installed. The built version is Intel only, though you could probably compile a PPC version if you were so inclined.

The app writes out a new file, rather than updating in place, due to some limitations in QTKit.

For the command line, running ./CaptionManager.app/Contents/MacOS/CaptionManager -help will give you the relevant info.

No license attached, because I still don't understand the implications of BSDing stuff created on the University's dime.

CaptionManager.zip

Oh also, the GUI leaks a little memory. Deal. I've also posted a screencast of the app.

Posted at 10:24 AM | News

November 18, 2009

Sony Launches Less Useful Z5U

Sony today announced the NXCAM, an AVCHD-based "professional" camera which bears a striking resemblance to the EX1 and Z5U.

You get 1080p Exmor CMOS chips (presumably 1/3"?), and it records AVCHD to the highly popular (sarcasm) Memory Stick media.

Pricing hasn't been announced, but presumably it'll be in the $4000 range like the Z5U. I'll be curious to see how this shakes out in the market.

Posted at 12:25 PM | News

October 26, 2009

ClipWrap 2.0 brings AVCHD support

Do you love AVCHD, but hate the long, disk-consuming transcodes? Well, ClipWrap 2.0 is here, and it lets you turn your AVCHD MTS files into QuickTime-compatible MOV files, with no transcoding and no generation loss. Dig it.

(disclaimer: the author of ClipWrap is a friend)

Posted at 2:59 PM | News

October 21, 2009

XDCam EX gets some friends

Sony has announced a couple of new additions to the XDCam EX family - the PMW-350 and the PMW-EX1R.

The 350 is a shoulder-mount camera with interchangeable lenses and 2/3" chips. That puts it somewhere between the 1/2" PDW-F355 and the 2/3" 4:2:2 PDW-700.


The EX1R is a minor bump to the EX1, adding features that users have asked for, like a dedicated viewfinder and a DVCam recording mode.

For me, the most interesting bit of news is that Sony is launching the "MEAD-MS01," an SxS-to-Memory Stick adapter. I guess Sony noticed that many EX1 and EX3 users have been using SD adapters, and decided to get into that market. And of course, they had to use everyone's least favorite flash format, Memory Stick. I'll stick to my SD cards for now, but it's nice to see Sony "legitimize" that recording option a bit.

Posted at 3:10 PM | News