
Aliasing when shooting 2K with a 4K sensor (FS700, PMW-F5, PMW-F55 and others).

There is a lot of confusion and misunderstanding about aliasing and moiré, so I've thrown together this article to try and explain what's going on and what you can (or can't) do about it.

Sony's FS700, F5 and F55 cameras all have 4K sensors. They also have the ability to shoot 4K raw as well as 2K raw when using Sony's R5 raw recorder, and the FS700 will also be able to shoot 2K raw to the Convergent Design Odyssey. At 4K these cameras have near zero aliasing; at 2K there is a risk of seeing noticeable amounts of aliasing.

One key concept to understand from the outset is that when you are working with raw, the signal out of the camera comes more or less directly from the sensor. When shooting non-raw, the output is derived from the full sensor plus a lot of extra, very complex signal processing.

First of all let's look at what aliasing is and what causes it.

Aliasing shows up in images in different ways. One common effect is a rainbow of colours across a fine repeating pattern; this is called moiré. Another artefact is lines and edges that are just a little off horizontal or vertical appearing to have stepped or jagged edges, sometimes referred to as "jaggies".

But what causes this and why is there an issue at 2K but not at 4K with these cameras?

Let's imagine we are going to shoot a test pattern that looks like this:

Test pattern, checked shirt or other similar repeating pattern.

And let's assume we are using a Bayer sensor such as the one in the FS700, F5 or F55, with a pixel arrangement like this, although it's worth noting that aliasing can occur with any type of sensor pattern or even a 3-chip design:

Sensor with Bayer pattern.

Now let's see what happens if we shoot our test pattern so that the stripes of the pattern line up with the pixels on the sensor. The top of the graphic below represents the pattern or image being filmed, the middle is the sensor pixels and the bottom is the output from the green pixels of the sensor:

Test pattern aligned with the sensor pixels.

As we can see, each green pixel "sees" either a white line or a black line, so the output is a series of black and white lines. Everything looks just fine… or does it? What happens if we move the test pattern or move the camera just a little bit? What if the camera is just a tiny bit wobbly on the tripod? What if this isn't a test pattern but a striped or checked shirt, and the person we are shooting is moving around? In the image below the pattern we are filming has been shifted to the left by a small amount.

Test pattern misaligned with pixels.

Now look at the output: it's nothing but grey, the black and white pattern has gone. Why? Simply because each green pixel is now seeing half of a white bar and half of a black bar. Half white plus half black equals grey, so every pixel sees grey. If we were to slowly pan the camera across this pattern, the output would alternate between black and white lines when the bars and pixels line up and grey when they don't. This is aliasing at work. Imagine the shot is of a person in a checked shirt: as the person moves about, the shirt will alternate between being patterned and being grey. As the shirt will not be perfectly flat and even, different parts of it will go in and out of sync with the pixels, so some parts will be grey, some patterned, and it will look blotchy. A similar thing happens with colours: the red and blue pixels will sometimes see the pattern and sometimes not, so the colours will flicker and produce strange patterns. This is the moiré that can look like a rainbow of colours.
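If you like to see this in numbers, here is a minimal sketch of the situation above (a toy model, assuming numpy is available): a row of pixels, each averaging the light across its own width, sampling a stripe pattern whose pitch matches the pixel pitch. Shift the pattern by half a pixel and the output collapses to flat grey, exactly as described.

```python
# Toy model of the stripe example: a 1-D row of "green" pixels that each
# average the light falling across their own width.
import numpy as np

def sample_pattern(phase, n_pixels=16, oversample=100):
    """Average a black/white stripe pattern over each pixel aperture.

    The stripe pitch equals the pixel pitch, so one stripe covers one
    pixel. `phase` shifts the pattern, in fractions of a pixel.
    """
    # Finely sampled scene: alternating white (1.0) and black (0.0) stripes.
    x = np.arange(n_pixels * oversample) / oversample + phase
    scene = (np.floor(x) % 2 == 0).astype(float)
    # Each pixel integrates (averages) the light across its own width.
    return scene.reshape(n_pixels, oversample).mean(axis=1)

print(sample_pattern(phase=0.0))  # aligned: 1, 0, 1, 0... full contrast
print(sample_pattern(phase=0.5))  # shifted half a pixel: 0.5 everywhere, flat grey
```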

So what can be done to stop this?

Well, what's done in the majority of professional level cameras is to add a special filter in front of the sensor called an optical low pass filter (OLPF). This filter works by significantly reducing the contrast of the image falling on the sensor at resolutions approaching the pixel resolution, so that the scenario above cannot occur. Basically the image falling on the sensor is blurred a little so that the pixels can never see only black or only white. This way we won't get flickering between black & white and then grey if there is any movement. The downside is that some contrast and resolution will be lost, but this is better than having flickery jaggies and rainbow moiré. In effect the OLPF is a type of de-focusing filter (for the techies out there, it is usually something called a birefringent filter). The design of the OLPF is a trade-off between how much aliasing is acceptable and how much resolution loss is acceptable. The OLPF cut-off isn't instant; it's a sharp but gradual cut-off that starts somewhere lower than the sensor resolution, so there is some undesirable but unavoidable contrast loss on fine details. The OLPF will be optimised for a specific pixel size and thus image resolution, but it's a compromise. In a 4K camera the OLPF will start reducing the resolution/contrast before it gets to 4K.
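Continuing the same toy model, here is roughly what the OLPF achieves, approximated (very crudely) with a box blur applied to the scene before the pixels sample it. A real OLPF is a birefringent optical element with a much gentler cut-off, not a digital filter; this sketch just shows why pre-blurring stops the flicker between full contrast and flat grey.

```python
# Crude stand-in for an OLPF: blur the scene *before* the pixels sample it.
# The blur here is deliberately heavy (it averages a full black/white stripe
# pair), so the fine pattern can no longer beat against the pixel grid at all.
import numpy as np

def sample_with_olpf(phase, n_pixels=16, oversample=100, blur_px=2.0):
    x = np.arange(n_pixels * oversample) / oversample + phase
    scene = (np.floor(x) % 2 == 0).astype(float)
    # Box blur two pixel-widths wide: one full stripe period.
    k = int(blur_px * oversample)
    kernel = np.ones(k) / k
    blurred = np.convolve(scene, kernel, mode="same")
    return blurred.reshape(n_pixels, oversample).mean(axis=1)

# Interior pixels now sit at 0.5 whatever the phase (the first and last
# values dip a little from the convolution's zero padding): no more
# black/grey flicker, at the cost of all contrast on detail this fine.
print(sample_with_olpf(0.0).round(2))
print(sample_with_olpf(0.5).round(2))
```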

(As an aside, this is one of the reasons why shooting with a 4K camera can result in better HD, because the OLPF in an HD camera cuts contrast as we approach HD, so the HD is never as sharp and contrasty as perhaps it could be. But shoot at 4K and down-convert and you can get sharper, higher contrast HD).

So that’s how we prevent aliasing, but what’s that got to do with 2K on the FS700, F5 and F55?

Well, the problem is this: when shooting 2K raw or in the high speed raw modes, Sony are reading out the sensor in a way that creates a larger "virtual" pixel. This almost certainly has to be done for the high speed modes to reduce the amount of data that needs to be transferred from the sensor into the camera's processing and recording circuits at high frame rates. I don't know exactly how Sony are doing this, but it might be something like my sketch below:

Using adjacent pixels to create larger virtual pixels.

So instead of reading individual pixels, Sony are probably reading groups of pixels together to create a 2K Bayer image. This creates larger virtual pixels and in effect turns the sensor into a 2K sensor. It is probably done on the sensor during the read-out process (possibly simply by addressing 4 pixels at the same time instead of just one), and this makes high speed continuous shooting possible without overheating or overload as there is far less data to read out.
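For the curious, here is a hedged sketch of what that kind of binning might look like in code: 2x2 groups of same-colour sites averaged into one larger virtual site, assuming an RGGB mosaic. To be clear, the grouping below is my guess based on the sketch above; Sony's actual readout scheme is not public.

```python
# Illustrative 2x2 same-colour binning: a 4K Bayer mosaic becomes a 2K
# Bayer mosaic of larger "virtual" pixels.
import numpy as np

def bin_bayer_2x2(mosaic):
    """Turn a 4K RGGB mosaic into a 2K RGGB mosaic of virtual pixels.

    Each output site is the mean of the four nearest same-colour input
    sites, i.e. a 2x2 bin within each colour plane.
    """
    h, w = mosaic.shape
    out = np.empty((h // 2, w // 2), dtype=mosaic.dtype)
    for dy in (0, 1):        # row parity selects the R/G or G/B rows
        for dx in (0, 1):    # column parity selects the colour within the row
            plane = mosaic[dy::2, dx::2]                # one colour plane
            binned = (plane[0::2, 0::2] + plane[0::2, 1::2]
                      + plane[1::2, 0::2] + plane[1::2, 1::2]) / 4
            out[dy::2, dx::2] = binned                  # back into mosaic order
    return out

mosaic_4k = np.random.rand(2160, 4096)  # stand-in for 4K sensor data
mosaic_2k = bin_bayer_2x2(mosaic_4k)    # 1080 x 2048 virtual-pixel mosaic
```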

But now the standard OLPF, which is designed around the small 4K pixels, isn't really doing anything, because the new "virtual" pixels are in effect much larger than the original 4K pixels the OLPF was designed around. The standard OLPF cuts off at 4K, so it has no effect at 2K: a 2K resolution pattern can fall directly on our 2K virtual Bayer pixels and you will get aliasing. (There's a clue in the filter name: optical LOW PASS filter, so it will PASS any signals that are LOWer than the cut-off. If the cut-off is 4K, then 2K will be passed as this is lower than 4K, but as the sensor is now in effect a 2K sensor we now need a 2K cut-off.)

On the FS700 there isn't (at the moment at least) a great deal you can do about this. But on the F5 and F55 cameras Sony have made the OLPF replaceable. By loosening one screw, the 4K OLPF can be swapped out for a 2K OLPF in just a few seconds. The 2K OLPF will control aliasing at 2K and high speed, and in addition it can be used if you want a softer look at 4K: the contrast/resolution reduction the filter introduces will give you a softer, "creamier" look which might be nice for cosmetics, fashion, period drama or other similar shoots.

Replacing the OLPF on a Sony PMW-F5 or PMW-F55. Very simple.

FS700 owners wanting to shoot 2K raw will have to look at adding a little bit of diffusion to their lenses. Perhaps a low contrast filter will help, or a net or stocking over the front of the lens to add some diffusion and slightly soften the image. Maybe someone will bring out an OLPF that can be fitted between the lens and camera body, but for the time being, to prevent aliasing on the FS700 you need to soften, defocus or blur the image a little to prevent the camera resolving detail above 2K. Maybe a soft lens will work, or just very slightly de-focussing the image.

But why don’t I get aliasing when I shoot HD?

Well, all these cameras use the full 4K sensor when shooting HD. All the pixels are used and read out as individual pixels. This 4K signal is then de-Bayered into a conventional 4K video signal (NOT Bayer). This 4K (non-raw) video will not have any significant aliasing, as the OLPF is 4K and the derived video is 4K. This conventional video signal is then electronically down-converted to HD, and during the down-conversion process an electronic low pass filter is used to prevent aliasing and moiré in the newly created HD video. You can't do this with raw sensor data, as raw comes BEFORE processing, directly from the sensor pixels, but you can do it with conventional video as the HD is derived from a fully processed 4K video signal.
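Here is a tiny numerical illustration of why that electronic low pass filter matters, using a simple 2:1 scale from 4K to HD. A real down-converter uses a much better multi-tap filter than this, but the principle is the same: filter first, then discard pixels.

```python
# Why filtered down-conversion beats naive decimation, sketched at 2:1.
import numpy as np

def decimate(img):
    return img[::2, ::2]                      # no filtering: aliases

def downscale_filtered(img):
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))           # 2x2 average, then decimate

# One-pixel-pitch vertical stripes: the finest detail a 4K frame can hold.
img_4k = np.tile([1.0, 0.0], (2160, 2048))    # shape (2160, 4096)
print(decimate(img_4k)[0, :6])                # all 1.0: stripes alias to flat white
print(downscale_filtered(img_4k)[0, :6])      # all 0.5: filtered to honest grey
```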

I hope you understand this. It’s not an easy concept to understand and even harder to explain. I’d love your comments, especially anything that can add clarity to my explanation.

UPDATE: It has been pointed out that it should be possible to take the 4K Bayer data and use an image processor to produce a 2K anti-aliased raw signal from it.

The problem is that, yes, in theory you can take a 4K signal from a Bayer sensor into an image processor and from that create an anti-aliased 2K Bayer signal. But the processing power needed to do this is incredible, as we are looking at taking 16 bit linear sensor data and converting it to new 16 bit linear data. That means using DSP with a massive bit depth and enough overhead to handle 16 bit in and 16 bit out: as a minimum, an extremely fast 24 bit DSP, or possibly a 32 bit DSP, working with 4K data in real time. This would have a big heat and power penalty and I suspect is completely impractical in a compact video camera like the F5/F55. This is rack-mount, high power workstation territory at the moment. Anyone who's edited 4K raw will know how processor intensive it is trying to manipulate 16 bit 4K data.

When shooting HD you're taking 4K 16 bit linear sensor data but only generating 10 bit HD log or gamma encoded data, so the processing overhead is tiny in comparison and a 16 bit DSP can quite happily handle this. In fact you could use a 14 bit DSP by discarding the two LSBs, as these would have little effect on the final 10 bit HD.
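To illustrate the scale of the difference, here is a toy conversion of 16 bit linear data to 10 bit log. The curve is a generic log2 one of my own, not Sony's actual S-Log maths; the point is simply that once highlights are log-compressed, 10 bits comfortably cover the captured range.

```python
# Toy illustration: packing 16-bit linear sensor data into 10-bit log video.
# The curve is a generic log2 mapping, NOT Sony's S-Log.
import numpy as np

def linear_to_10bit_log(linear16, stops=14):
    """Map 16-bit linear values onto a 10-bit log scale covering `stops` stops."""
    lin = np.maximum(linear16.astype(np.float64), 1.0) / 65535.0
    logval = (np.log2(lin) + stops) / stops   # 0..1 across the captured range
    return np.clip(np.round(logval * 1023), 0, 1023).astype(np.uint16)

ramp = np.array([64, 256, 1024, 4096, 16384, 65535], dtype=np.uint16)
print(linear_to_10bit_log(ramp))  # each doubling of light gets the same
                                  # handful of codes: highlights compress
```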

For the high speed modes I doubt there is any way to read out every pixel from the sensor continuously without something overheating. It's probably not the sensor that would overheat but the DSP. The FS700 can actually do 120fps at 4K for 4 seconds, so clearly the sensor can be read in its entirety at 4K and 120fps, but my guess is that something gets too hot for extended shooting times.

So this means that somehow the data throughput has to be reduced. The easiest way to do this is to read adjacent pixel groups. This can be done during the readout by addressing multiple pixels together (binning). If done as I suggest in my article, this adds a small degree of anti-aliasing. Sony have stated many times that they do not pixel or line skip and that every pixel is used at 2K, but either way, whether you line skip or pixel bin, the effective pixel pitch increases and so you must also change the OLPF to match.

A quick look at the Sony PXW-Z100 at IBC.

IBC is still in full swing and I'm very busy at the show, but I managed to spend a bit of time with the Z100 today. I was able to compare it with some of the other cameras on the camera set, but this is a very crude first look at a camera running beta firmware and picture settings, so it may not 100% represent the final product, though I do expect it to be pretty close.

Having played with the Z100 now, I have to say I am pleasantly impressed. It is not as sensitive as the PMW-200 or an EX1; I estimate it's about 1.5 stops less sensitive at 0dB. But it is remarkably noise free, with slightly less noise than a PMW-200 at 0dB. Even at +9dB (which brings it back up to similar sensitivity to a PMW-200/EX1 at 0dB) the noise is not too bad. Fast pans at +9 or +12dB will reveal some image smear due to the 3D noise reduction having to work harder, but it's not too bad and usable for most applications.

I thought it would be worse than this. There must be a lot of noise reduction and processing taking place to produce this clean image, but overall the NR is very transparent and well executed. I estimate dynamic range at about 10 stops, maybe a little more, but certainly nowhere near the 14 stops you can get from a camera like the F5 or FS700 in raw mode. The PMW-300 on the Sony booth is showing more dynamic range than the Z100. I did expect this due to the small pixel size. The standard 709 gamma curve with knee works quite well. The Cinematone gammas don't bring any more dynamic range as far as I can tell, but the highlight roll-off is more pleasing and a little more natural looking with them.

My biggest reservation is focussing the camera with the built-in viewfinder or LCD. The rear finder is really not up to the task of focussing for 4K. The LCD panel is better, but with no magnifier or monocular you're going to need damn good close-up eyesight to be able to use it for accurate focus at 4K. This is not an issue unique to this camera, as no camera I know of has a viewfinder better than 1080P and most are only 720P or 1/4 HD (960×540, which is what I believe the Z100 is). But not having a magnifier makes this even worse than most. So you're almost certainly going to have to rely on autofocus to get the focus spot on in many situations. Fortunately the autofocus is fast and accurate. I think with these smaller cameras the use of autofocus will be common even for us old "I never use autofocus" operators, just as autofocus is now normal even for professional photographers. There is a good coloured peaking function that works well, and the deeper DoF from the small sensor does mean that focus errors are not quite as telling as on a large sensor camera. But even so, the LCD, for me at least, is far from ideal for good focus at 4K. I think you're going to need to either add a 3rd party loupe or use an external finder such as the Alphatron with focus magnification.

Build quality is good; the camera feels very solid yet lightweight, and even with a high capacity battery it is comfortably under 3kg. It uses the very common NP-F type batteries. One minor gripe is that the shoe on the handle in front of the LCD means that if you have a large light or radio mic attached, you can't open and close the LCD panel.

The menu system is lifted straight from the PMW-F5 and F55 and most of the menu pages are very similar. Scene file settings are quite comprehensive and there is a lot of scope for fine tuning the pictures with matrix, detail and gamma settings. However, as I said, there are no extended dynamic range Cinegammas or Hypergammas, but you can adjust the knee and black gamma to fine tune your contrast range and dynamic range.

Overall, it's better than I expected. The 4K images are sharp and clear, not overly sharpened, and they look quite natural. At 0dB the noise levels are very low and the image is quite clean, but sensitivity is lower than we expect from a modern HD camera (no big surprise). Dynamic range is also a little lower than you can get from a good 1/2″ camera, but not significantly so. I think Sony have done a good job of squeezing as much as they can from this small sensor with very small pixels. The 20x zoom seems to stay nice and sharp across the zoom range, even out in the corners. As an F5 owner there have been many occasions when I have longed for a sharp 20x zoom that I can use when shooting 4K. That's probably something I'll never be able to afford for my F5, but the Z100 opens up the possibility of having that wide zoom range and 4K. Providing the scene isn't too dark or too contrasty, the Z100 would allow me to get those shots for a lot, lot, lot less money than a very big, very heavy PL mount zoom.

Sample 4K footage from the FS700 with IFR5/R5 raw recorder.

Below is a little compilation of 4K clips shot with the FS700. Shot over a couple of very nice summer days, the shots really show the incredible dynamic range available when you shoot linear raw. Linear raw, also known as "linear light" or "scene referred", captures the light coming from the scene exactly as it is, without any gamma. Gamma mimics the way our own visual system works and is used to save data by compressing highlights. But light in the real world is not actually like this, it's linear (it's just that we don't perceive it that way). Linear raw provides amazing contrast in both the shadows and highlights, so it grades beautifully.
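As a small numerical aside on the gamma point, here is where 18% mid grey lands with and without a Rec-709 style gamma curve. The curve itself is from the standard; the comparison is just illustrative of why gamma "spends" code values where our eyes look.

```python
# With a Rec.709-style transfer curve, mid grey (18% reflectance) lands
# well up the code range, leaving plenty of codes for the shadows; stored
# linearly it would sit in the bottom fifth.
def rec709_oetf(lin):
    """Rec.709 opto-electronic transfer function (scene linear 0..1 in)."""
    return 4.5 * lin if lin < 0.018 else 1.099 * lin ** 0.45 - 0.099

mid_grey = 0.18
print(f"linear encoding: 18% grey at {mid_grey:.0%} of the code range")
print(f"Rec.709 gamma:   18% grey at {rec709_oetf(mid_grey):.0%} of the code range")
# linear ~18%, gamma ~41%: the curve compresses highlights to favour shadows.
```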

One issue however is that when you're capturing such a big dynamic range (some of the shots have in excess of 12 stops) but showing it on conventional monitors that can only show at best 10 stops (about 8 stops for LCD, 10 for OLED), the image will look flat if you try to keep all of the original range. So sometimes, even though it might not actually be over exposed, you still need to allow your highlights to blow out for the picture to look natural or real. I find grading shots with big dynamic ranges quite a challenge. The shots on the river with brilliant white boats under dark trees were tough to get right, but you can really see the dynamic range where I've been able to pull meaningful picture information out of the deepest shadows while still keeping clouds in the bright blue sky.

The 12 bit raw from the FS700 is remarkably good. It's so close to the 16 bit raw from the F5 and F55 that frankly I don't think even the pixel peepers amongst us would be able to tell the difference in the end result. I do feel that I have to be just a little bit more accurate with my exposure and grading with the FS700, but maybe that's just psychological? The FS700 for some reason is a bit noisier than my F5, especially in the colours, but it's not something that concerns me; it's still a nice clean image.

FS700 with IFR5 and R5.

Ergonomically, the FS700 with IFR5/R5 for location or run and gun work is a bit of a disaster, which is a great shame. Mount the recorder on some extended rail contraption and you have a very long and very heavy camcorder, and you can't use the Sony viewfinder because the recorder is in the way. For my shoots I used an Alphatron EVF.

Exposure with the FS700 and raw is pretty straightforward. You can use S-Log2 in a picture profile to view the camera's full range, but the image will be very flat. Or you can use any of the other gammas (or a full blown picture profile) as a kind of fake Look Up Table to approximate various final looks. This all comes out on the HDMI output and has no effect on the raw recording. At the same time you also get an additional HDSDI output on the AUX out of the R5; this is always S-Log2.

So here’s the clip. I’ll write more about working with raw and the FS700 after IBC when I’ve been able to get the latest news on the Convergent Design Odyssey, the other way to record 2K raw and 4K compressed from the FS700.

Select “Original” under the quality setting to see the clip in 4K.

Sample 4K and 2K raw video clips from the FS700 and Convergent Design Odyssey.

I spent quite a bit of time last week shooting in 4K with an FS700 using Sony's IFR5 adapter and R5 recorder. I have to say that the pictures are really amazing. The dynamic range is incredible and the resolution and clarity beautiful. I'll be posting a clip in the next few days and you'll be able to see the footage at IBC.

FS700 with IFR5 and R5.

But, the ergonomics are terrible. Attach the IFR5 and R5 to the back of an FS700 with an extended rail system and you have one heck of a long and heavy camcorder. For studio or drama shoots this may well be fine, in fact you may end up using the recorder as a completely separate device in the video village. But for documentary or run and gun the camera becomes a real monster.

The alternative to the IFR5/R5 is the Convergent Design Odyssey. The Odyssey can't do 4K raw, but it can do very high quality 4K compressed as well as 2K raw and high speed. For many, 4K compressed will be far more manageable than 4K raw. It's only the size of a small monitor (in fact it IS a monitor, a very nice OLED monitor), so it's far, far easier to use with the FS700. Convergent Design have just posted some sample clips on their website, so if you want an idea of just how good this combo will be, follow the link below.

http://www.convergent-design.com/Products/Odyssey7Q/Sony.aspx

4 New Cameras From Sony! ActionCam, Music Cam and 2 new 4K cams.

It’s official and I can talk about them now!

Sony are bringing four new cameras to the market from their pro, consumer and semi-pro department. These cameras straddle the market and will find a place in the hands of both home shooters and professionals.

New Sony HDR-AS30 ActionCam in the new lightweight housing.

Starting with the smallest, this one will look very familiar to many of you. It's a new version of the GoPro-like ActionCam. The new model is the HDR-AS30. Not hugely different from the previous model, it offers HD recording at up to 120fps in 720p and 60p at 1920×1080, plus WiFi connection for remote control and monitoring. The great news for us here in Europe and other PAL regions is that the new model now includes 25 and 50fps frame rates. Add in electronic image stabilisation as well as the very sensitive EXMOR-R sensor and this really is a great alternative to the GoPro. As well as the improved frame rates, the AS30 now comes with a much lighter housing. The original AS10/AS15 housing was built for deep water diving and as a result was quite bulky and heavy.

The new menu buttons on the ActionCam housing.

The new housing is very similar to the old one but made of thinner plastic, so it's much lighter and less bulky. However, the slim housing is only suitable for use in shallow water or to withstand the occasional dunking it would get on, say, a surfboard or windsurfer. Another new feature is that the housing now incorporates buttons that allow you to change the camera settings without having to remove it from the housing.

Sony are well aware that what really matters with these mini cams is mounting flexibility. So along with the camera, Sony are extending the range of mounts, brackets and adapters available. They even have a clever device that turns the camera into a small handheld camcorder with a flip-out screen. Another add-on coming soon is a wireless wrist strap monitor and remote. Oh, and one more thing: just in case you forget where you took your pictures, the camera now has a GPS receiver built in that tags your videos with the shooting location.

Sony MV1 Music Video camera with stereo microphones.

Next up is a new type of camcorder for Sony… or is it an audio recorder with a built-in camera? When I was first shown the HDR-MV1 I really didn't know what to make of it. It is referred to as the Music Video Camera by Sony. The concept is a camera that can shoot good video in low light along with excellent quality stereo audio, for bands and musicians to shoot simple YouTube videos etc. The camera is certainly very capable of doing exactly that, but there is also a lot more it can be used for. Not much bigger than an electric shaver and sporting a pair of stereo microphones with 120 degree separation, this camera is so easy to use for capturing stunning quality sound with reasonable HD pictures. It is one of those gadgets that will find its way into many camera crews' kit bags. I've been playing with one and it's great. For example, when shooting some steam trains I was able to just place the MV1 on a bridge parapet or beside the track to capture wonderful stereo sound of the trains puffing past. OK, I'll have to sync the sound up with the main video in post, but as the camera shoots pictures too that's pretty straightforward. To have done this conventionally would have required a good stereo mic, a stand, cables or radio links etc. Costing less than most decent stereo microphones, it's so simple and convenient that I'll be looking to get one as soon as they are released. Click here to download a sample audio clip from the MV1.

Finally we have two new 4K camcorders: the Sony FDR-AX1 and PXW-Z100. Starting with the AX1 (on which the Z100 is based), this is a compact handheld camcorder that has a Sony G series 20x zoom lens with a single 8.3 Megapixel back-illuminated EXMOR-R 1/2.3″ sensor (that's just a little smaller than 1/2″). The sensor allows for 4K shooting at up to 60fps. Interestingly for a consumer camera, this one uses a variation of Sony's new XAVC codec from the pro line of cameras to record the 4K footage. XAVC-S records 4K at 150Mb/s and HD at around 50Mb/s (compared to 220+ and 100+ Mb/s at 25fps for regular XAVC). This is a Long GoP version of the XAVC codec and is limited to QFHD (UHDTV) at 3840 x 2160 with 8 bit 4:2:0 encoding. As this involves some quite high bit rates, the camera uses XQD cards for recording; there are 2 slots for the XQD cards. Another first for a consumer camcorder is a pair of XLR audio connectors; clearly this camera is aimed at the high end of the consumer market. The camera has an HDMI output that will output 8 bit 4:2:0 4K for connection to a consumer 4K TV.

Sony PXW-Z100 4K camcorder.

Taking the AX1 up a notch is the Z100. Many of the specs are the same, but the recording codec on the Z100 is the same XAVC I-frame codec as used on Sony's F5 and F55 cameras. This allows the Z100 to record the full 4K 17:9 4096 x 2160 sensor output at 10 bit 4:2:2. The downside is that the data rates are now much higher: 232Mb/s for HD and up to 600Mb/s for 4K (at 60fps). This is a lot of data to manage and I can't help but think that for many, the QFHD long GoP codec of the AX1 might be a better option (rumour is that there will be a firmware update for the Z100 later in the year that will allow it to record using XAVC-S). A later firmware update will also add the option to record AVCHD to an SD card alongside the XAVC recordings. Other outputs include composite AV on phono jacks as well as timecode out (also phono).

Both the AX1 and Z100 use Sony's NP-F type batteries, so no expensive batteries needed here!

As well as HDMI, the Z100 has a 3G HDSDI output which can output an HD 60fps signal, or a down-scaled HD image when shooting in 4K.

Top view of the Sony Z100 4K camcorder.

The Z100 (and AX1 I believe) use the same paint and scene file settings as the PMW-F5 and F55 so it should be quite straightforward to transfer picture settings between the various cameras.

So just how will a small sensor 4K camera perform? Well, the pixels will be very small, so the camera won't be as sensitive or have the dynamic range of the many large sensor 4K cameras on the market right now. As this is an EXMOR-R sensor it will be good for its size, but don't expect it to be a great performer in low light. Other issues will be resolution and diffraction. When you have very small pixels and high resolution, you run into an optical effect where the light passing through a small aperture gets bent and de-focussed. This limits the camera's usable aperture range. I think you're going to be limited to keeping the iris more open than f8 to get the best results from this camera. Fortunately both cameras have a 4 position ND filter system that will help keep the aperture within the best range.
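As a back-of-envelope check on the diffraction point, here is the standard Airy disk estimate (diameter roughly 2.44 x wavelength x f-number). The pixel pitch below is my own rough guess for an 8.3MP sensor of this size, not a published Sony figure.

```python
# Rough diffraction check, assuming green light (~550nm) and an assumed
# pixel pitch of about 1.6 microns for this class of sensor.
wavelength_um = 0.55      # green light, in microns
pixel_pitch_um = 1.6      # assumed pitch; not a published Sony spec

for f_number in (2.8, 4, 5.6, 8, 11):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy disk ~{airy_um:.1f}um "
          f"(~{airy_um / pixel_pitch_um:.1f} pixels wide)")
# By f/8 the Airy disk spans several pixels, so fine 4K detail starts to
# soften - hence the advice to stay more open than f8.
```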


IBC just around the corner.

So, IBC is just a few weeks away. I'm busy reviewing new products that I can't talk about yet, and I'll be taking much more in-depth looks at the new Sony PMW-300 and PMW-400. This week I'm doing some 4K and 2K test shots with my FS700, an IFR5 and an R5 (it looks very, very good). Sony will once again have the ICE Bar where you can come and ask questions about Sony's products along with 3rd party accessories, workflows or settings. I'll be attending the ICE Bar for much of the show. So watch this space: I'll be able to talk a little about a couple of the new products before IBC.

PMW-F5 and F55 firmware version 1.22 released. Bug Fix for HDCAM SR.

Sony have just released an interim software update for the PMW-F5 and F55. This update fixes a bug which affects the playback of HDCAM SR (SStP) files. The essence recorded by v1.2 and v1.22 in any HDCAM SR files is OK, but there is a small issue with the files that means they may not play back as expected.

Here are the links to the new firmware:

PMW-F55_V121_1.22_10_2013-08-23_00-36-28_firmware.zip (60047 KB)

PMW-F5_V121_1.22_11_2013-08-23_00-46-28_firmware.zip (60057 KB)

PMW-F55_F5_VersionUp_Guide.zip (428 KB)

If updating from version 1.15 or earlier it is VITAL that you perform an “all reset” immediately after the update.

ACES – What’s it all about, do I need to worry about it?

You may have heard the term "ACES" in presentations or workflow discussions for a while now. You may know that it is the Academy Color Encoding System and is a workflow for post producing high end material, but what does it mean in simple terms?

This isn’t a guide on how to use or work with ACES, it’s hopefully an easy to understand explanation of the basics of what it does, why it does it and what advantages it brings.

One of the biggest problems in the world of video and cinema production today is the huge number of different standards in use for acquisition and viewing. There are different color spaces, different gamma curves, different encoding standards, different camera setups and different output requirements. All in all it's a bit of a confusing mess. ACES aims to mitigate many of these issues while also increasing image quality beyond what existing workflows normally allow.

There are 3 different conversion processes within the ACES workflow, plus the actual post production and grading process. These conversions are called the IDT, RRT and ODT. It all sounds very confusing, but when you break it down it's fairly straightforward, on paper at least!

The first stage is the IDT or Input Device Transform. This process takes the footage from your camera and converts it to the ACES standard. The IDT must be matched specifically to the camera and codec you are using; you can't use a Red IDT for a Sony F55 or an Arri Alexa, you must use exactly the right IDT. Using the IDT (a bit like a look-up table) you convert your footage to an ACES OpenEXR file. OpenEXR is the file format, like .mov or DPX etc.

Unlike most conventional video cameras, the ACES files do not have any gamma or other similar curves to mimic the way our eyesight or film responds to light. ACES is a linear format. The idea is to record and store the light coming from the scene as accurately as technically possible. This is referred to as "Scene Referenced", as you are capturing the light as it comes from the scene, not as you would show it on a monitor to make it look visually pleasing. Most traditional video systems are said to be "Display Referenced", as they are based on what looks nice on a monitor or cinema screen. This normally involves gamma compression, which reduces the range of information captured in the highlights. We don't want this if we are to maximise our grading and post production possibilities, so ACES is Scene Referenced, and this means a linear response to match the actual physical behaviour of light, which is very different to the way we see light or the way film responds to it. That linear response means lots and lots of data in the highlights and as a result large file sizes; there is no limit to the dynamic range ACES can handle. The other thing ACES has is an unrestricted color space. Most traditional systems (including film) have a narrow or restricted color space in order to save space for transmission or distribution. If a TV screen can only show a certain range of colours, why capture more than this? This is "Display Referencing". But ACES is designed to be able to store the full spectrum of the original scene: it is "Scene Referenced".

In addition, by carefully matching the IDT to the camera, all your source material should look the same after converting to ACES, even if it was shot on different cameras. There will still be differences due to differing dynamic ranges, colour accuracy, noise etc, but the ACES material should be as close as technically possible to the original physical scene, so a grade applied to footage from one camera make or model should work in exactly the same way on footage from a different camera model.

Now this big color space may well currently be impossible to capture and display, but by deliberately not restricting the color space, ACES has the ability to grow and to output files using any existing color space.

So… using the IDT we have now converted our footage to ACES linear, saving it as an OpenEXR file. Or, as in the case of some grading packages like Resolve, we have told it to convert our material into ACES as part of the grading process. But how do we view it? ACES linear looks all wrong on conventional monitors, so we need a way to convert back from ACES to conventional video so we can see what our finished production will look like. There are two stages to this: the first is called the RRT, the second the ODT, and sometimes these are combined into a single process.

The RRT or Reference Rendering Transform converts the ACES linear data to an ultra high quality but slightly less complicated standardised intermediate reference format. From this standardised format you can then apply the ODT or Output Device Transform to convert from that common RRT intermediate to whatever output standard you need. In practice no one sees or works with the RRT; it is just there as a fixed starting point for the ODT, and in most cases the RRT and ODT operations are combined into a single process, a bit like adding a viewing LUT. The RRT transformation is incredibly complex while the ODT is a much simpler process. By doing the difficult maths with one single RRT and keeping the ODTs simpler, it's easier to create a large range of ODTs for specific applications. So from one grading pass you can produce masters for broadcast TV, the web or cinema DCP just by changing the ODT part of the calculations used for the final output.
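If it helps, the whole chain can be pictured as nothing more than function composition. The sketch below is purely schematic: every function body is a placeholder (the real transforms are published by the Academy as CTL code), but the shape of the pipeline is the point.

```python
# Schematic of the IDT -> grade -> RRT -> ODT chain. All bodies are
# placeholders; only the structure is meaningful.
def idt_sony_f55(camera_pixels):
    """Camera-specific: raw/log camera data -> ACES scene-linear."""
    return camera_pixels  # placeholder for the real matrix + linearisation

def grade(aces_pixels):
    """Creative work happens here, in scene-linear ACES space."""
    return aces_pixels

def rrt(aces_pixels):
    """One fixed, complex transform: ACES -> standard intermediate."""
    return aces_pixels

def odt_rec709(rrt_pixels):
    """Simple, per-display transform: intermediate -> Rec-709 monitor."""
    return rrt_pixels

def render_for_monitor(camera_pixels):
    # The RRT/ODT pair sits at the end, like a viewing LUT; swap the ODT
    # (odt_dcp, odt_srgb, ...) to master for other outputs from one grade.
    return odt_rec709(rrt(grade(idt_sony_f55(camera_pixels))))
```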

If you're using a conventional HD monitor then you will need to use an ODT for Rec-709, so that the ACES material gets converted to Rec-709 for accurate monitoring. It should be noted though that as you are monitoring in the restricted Rec-709 color space and gamma range, you are not seeing the full range of the ACES footage or RRT intermediate.

So, it all sounds very complicated. In practice what you have to do is convert your footage using the right IDT to an ACES OpenEXR file (or tell the grading application to convert to ACES on the fly). You set up your grading workspace to use ACES, set your output RRT/ODT to the standard you are viewing with (typically Rec-709) and do your grading as you would normally. One limitation of ACES is that due to the large color space, many conventional LookUp Tables won't work as expected within the ACES environment; they are simply too small. You need at least a 64x64x64 LUT, which is massive. At the end of the grade you then choose the ODT for your master render, which might be 709 for TV or sRGB for the web, and render your master. If you're taking your project elsewhere for finishing then you can output your files without the RRT/ODT as ACES OpenEXR.

The advantages of ACES are: a standardised workflow with standardised files, so any ACES OpenEXR file from any camera will look and behave just like an ACES OpenEXR file from any other camera (or at least as closely as technically possible).

Unlimited dynamic range and color space, so no matter what your final output, you are getting the very best possible image. This is of course limited by the capture capabilities of the camera or film stock, but the workflow and recording format itself is not a limiting factor.

Fast output to multiple standards, by doing the difficult maths using a common high quality RRT (Reference Rendering Transform) followed by a simpler ODT specific to the format required. Very often these two functions are combined into a single ODT process.

So is ACES for you? Maybe it is, maybe not. If you use a lot of LUTs in your grade then perhaps ACES is not going to work for you. If your camera already shoots linear raw then you're already a long way towards ACES anyway, so you may not see any benefit from the extra stages. However, if you're shooting with different cameras and there are IDTs available for all the cameras you're using, then ACES should help make everything consistent and easier to manage. ACES OpenEXR files will be large compared to conventional video, so that needs to be taken into account.

Important Firmware Update for PMW-F5 and F55. V1.21.

Sony have released an interim firmware update for the F5 and F55. This is a maintenance release to address a number of bugs and stability issues reported by end users. As such it is probably a good idea for users to upgrade to this release as soon as possible. In particular it addresses some of the issues caused by updating the camera to V1.2 and then not performing an "all reset" immediately after the update.

PMW-F5_V121_1.21_9_2013-08-02_06-45-47_firmware.zip (60062 KB)

PMW-F55_V121_1.21_8_2013-08-02_06-35-47_firmware.zip (60049 KB)

Release note_F55_F5_V1_21.pdf (67 KB)

Please make sure you read the release notes and if at all unsure perform an “all reset” immediately after updating your firmware.