DaVinci Resolve 10 released. Lite includes UHD resolution.

It's been in beta for a while, but the release version of Blackmagic Design's colour grading tool is now available to download in both the full paid and free Lite versions. The Lite version now allows you to export at up to UHD resolution, so even those shooting 4K are going to be able to deliver at better than HD resolution. One of the best new features in Resolve 10 is a more refined and comprehensive set of editing tools, including a title generator. For full details take a look at the Blackmagic Design website.

It’s all in the grade!

So I spent much of last week shooting a short film and commercial (more about the shoot in a separate article). It was shot in raw using my Sony F5, but could have been shot with a wide range of cameras. The "look" for this production is very specific: much of it is set late in the day or in the evening and requires a gentle, romantic look.

In the past much of this look would have been created in camera: shooting with a soft filter for the romantic look, shifting the white balance with warming cards or a dialled-in white balance for a warm golden-hour evening look, or perhaps using a custom picture profile or scene file to alter the look of the image coming from the camera. These methods are still very valid, but thanks to better recording codecs and lower cost grading and post production tools, these days it's often easier to create the look in post production.

When you look around on YouTube or Vimeo at most of the showreels and demo reels from people like me, they will almost always have been graded. Grading is now a huge part of the finishing process and it makes a big difference to the final look of a production. So don't automatically assume everything you see online looked like that when it was shot. It probably didn't, and a very, very big part of the look tends to be created in post these days.

One further way to work is to go halfway to your finished look in camera and then finish off the look in post. For some productions this is a valid approach, but it comes with some risks, and there are some things that, once burnt into the recording, are hard to change subsequently in post. For example, any in-camera sharpening is difficult to remove, as are crushed blacks or a skewed or offset white balance.

Also understand that there is a big difference between trying to grade using the color correction tools in an edit suite and using a dedicated grading package. For many, many years I used to grade using my editing software, simply because that was what I had. Plug-ins such as Magic Bullet Looks are great and offer a quick and effective way to get a range of looks, but while you can do a lot with a typical edit color corrector, it pales into insignificance compared to what can be done with a dedicated grading tool, which can not only create a look but then adjust individual elements of the image.

When it comes to grading tools, DaVinci Resolve is probably the one that most people have heard of. Resolve Lite is free, yet still incredibly capable (provided you have a computer that will run it). There are lots of other options too, like Adobe SpeedGrade, but the key thing is that if you change your workflow to include lots of grading, then you need to change the way you shoot too. If you have never used a proper grading tool then I urge you to learn how to use one. As processing power improves and these tools become more and more powerful they will play an ever greater role in video production.

So how should you shoot for a production that will be graded? I'm sure you will have come across the term "shoot flat", and this is often said to be the way you should shoot when you're going to grade. Well, yes and no. It depends on the camera you are using, the codec, noise levels and many other factors. If you are the DP, cinematographer or DIT, then it's your job to know how footage from your camera will behave in post production so that you can provide the best possible blank canvas for the colourist.

What is shooting flat exactly? Let's say your monitor is a typical LCD monitor. It will be able to show 6 or 7 stops of dynamic range. Black at stop 0 will appear to be black and whites at stop 7 will appear bright white. If your camera has a 7 stop range then the blacks and whites from the camera will be mapped 1:1 with the monitor and the picture will have normal contrast. But what happens when you have a camera that can capture double that range, say 12 to 14 stops? The bright whites captured by the camera will be significantly brighter than before. If you then take that image and try to show it on the same LCD monitor you have an issue, because the LCD cannot go any brighter than before, so the much brighter whites from the high dynamic range shot are shown at the same brightness as the original low dynamic range shot. Not only that, but the larger tonal range is now squashed together into the monitor's limited range. This reduces the contrast in the viewed image and as a result it looks flat.
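To put rough numbers on that, here is a minimal sketch (in Python, with illustrative figures rather than any specific camera or monitor) of how squeezing more scene stops into the same display range reduces the on-screen contrast of every stop:

```python
# Illustrative numbers only: map a scene's stop range onto a display that can
# only show about 7 stops, and compare how much display contrast each
# captured stop gets.

def display_step_per_scene_stop(scene_stops, display_stops=7):
    """Fraction of a display stop used by each captured scene stop."""
    return display_stops / scene_stops

for stops in (7, 14):
    step = display_step_per_scene_stop(stops)
    print(f"{stops}-stop scene: {step:.2f} display stops per scene stop")

# 7-stop scene  -> 1.00 display stop per scene stop (normal contrast)
# 14-stop scene -> 0.50 display stops per scene stop (half the contrast: looks flat)
```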

That's a real "shoot flat" image (a wide dynamic range shown on a typical, limited dynamic range monitor), but you have to be careful, because you can also create a flat looking image by raising the camera's black level or black gamma or by reducing the white level. Doing this reduces the contrast in the shadows and mid tones and will make the pictures look low contrast and flat. But raising the black level or black gamma or reducing the white point rarely increases the dynamic range of a camera; most cameras' dynamic range is limited by the way they handle highlights and over exposure, not by the shadows, blacks or white level. So just beware: not all flat looking images bring real post production advantages. I've seen many examples of special "flat" picture profiles or scene files that don't actually add anything to the captured image. It's all about dynamic range, not contrast range. See this article for more in depth info on shooting flat.

If you're shooting for grading, shooting flat with a camera with a genuinely large dynamic range is often beneficial, as you provide the colourist with a broader dynamic range image that he/she/you can then manipulate so that it looks good on typically small dynamic range TVs and monitors. Excessively raising the black level or black gamma, however, rarely helps the colourist, as this just introduces an area that will need to be corrected to restore good contrast rather than adding anything new or useful to the image. You also need to consider that it's all very well shooting with a camera that can capture a massive dynamic range, but as there is no way to ever show that full range, compromises must be made in the grade so that the picture looks nice. An example of this would be a very bright sky. In order to show the clouds in the sky, the rest of the scene may need to be darkened, as the sky is always brighter than everything else in the real world. This might mean the mid tones have to be rather dark in order to preserve the sky. The other option would be to blow the sky out in the grade to get a brighter mid range. Either way, we don't have a way of showing the 14 stop range available from cameras like the F5/F55 with current display technologies, so a compromise has to be made in post, and this should be in the back of your mind when shooting scenes with large dynamic ranges. With a low dynamic range camera, you, the camera operator, would choose whether to let the highlights over expose to preserve the mid range or whether to protect the highlights and put up with a darker mid range. With these high dynamic range cameras that decision is largely moved to post production, but you should still be looking at your mid tones and, if needed, adding a bit of extra illumination so that the mids are not fighting the highlights.

In addition to shooting flat there is a lot of talk about using log gamma curves: S-Log, S-Log2, LogC etc. Again, IF the camera and recording codec are optimised for log then this can be an extremely good approach. Remember that if you choose to use a log gamma curve then you will also need to adjust the way you expose to place skin tones etc in the correct part of the log curve. It's no longer about exposing for what looks good on the monitor or in the viewfinder, but about exposing the appropriate shades in the correct part of the log curve. I've written many articles on this so I'm not going to go into it here, other than to say log is not a magic fix for great results, and log needs a 10 bit codec if you're going to use it properly. See these articles on log: S-Log and 8 bit, or Correct Exposure with Log. Using log does allow you to capture the camera's full range, it will give you a flat looking image, and when used correctly it will give the colourist a large blank canvas to play with. When using log it is vital that you use a proper grading tool that will apply log based corrections to your footage, as adding linear corrections to log footage in a typical edit application will not give the best results.
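As a rough illustration of why log really wants a 10 bit codec (back-of-the-envelope arithmetic only, not the actual S-Log2 code allocation): a log curve spreads the captured stops fairly evenly across the recorded code values, so the number of codes left for each stop falls away quickly at 8 bit.

```python
# Back-of-the-envelope only: assume a log curve spreads 14 stops roughly
# evenly across the available code values and see how many codes each stop
# gets at different bit depths.

def codes_per_stop(bit_depth, stops=14):
    return (2 ** bit_depth) / stops

for bits in (8, 10):
    print(f"{bits} bit: ~{codes_per_stop(bits):.0f} code values per stop over 14 stops")

# 8 bit : ~18 codes per stop  -> little room to stretch in the grade before banding
# 10 bit: ~73 codes per stop  -> far more room to push the image around
```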

So what if your camera doesn't have log? What can you do to help improve the way the image looks after post production? First of all, get your exposure right. Don't over expose. Anything that clips cannot be recovered in post. Something that's a little too dark can easily be brightened a bit, but if it's clipped it's gone for good. So watch those highlights. Don't under expose either, just expose correctly. If you're having a problem with a bright sky, don't be tempted to add a strong graduated filter to the camera to darken the sky. If the colourist tries to adjust the contrast of the image the grad may become more extreme and objectionable. It's better to use a reflector or some lights to raise the foreground rather than a graduated filter to lower the highlight.

One thing that can cause grading problems is knee compression. Most video cameras by default use something called the "knee" to compress highlights. This does give the camera the ability to capture a greater dynamic range, but it is done by aggressively compressing the highlights together, and the compression is either on or off. If the light changes during the shot and the camera's knee is set to auto (as most are by default), then the highlight compression will change mid shot, and this can be a nightmare to grade. So instead of using the camera's default knee settings, use a scene file or picture profile to set the knee to manual, or use an extended range gamma curve such as a Hypergamma or Cinegamma that does not have a knee and instead uses a progressive type of highlight compression.
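If it helps to picture what the knee is doing, here's a hedged sketch (illustrative numbers, not any particular camera's actual curve): below the knee point the signal is left alone, above it the highlights are squeezed with a much shallower slope. With an auto knee the knee point and slope move around as the light changes, which is exactly why a mid-shot change is so hard to grade.

```python
# Toy knee: untouched below the knee point, aggressively compressed above it.
# knee_point and slope are illustrative; an auto knee varies them with the scene.

def apply_knee(level, knee_point=0.85, slope=0.25):
    """level: pre-knee video level, 0.0-1.0+ (values over 1.0 are over exposed highlights)."""
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

for v in (0.5, 0.85, 1.0, 1.5):
    print(f"in {v:.2f} -> out {apply_knee(v):.3f}")
```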

Another thing that can become an issue in the grading suite is image sharpening. In-camera sharpening such as detail correction works by boosting contrast around edges. So if you take an already sharpened image into the grading suite and then boost the contrast in post, the sharpening will become more visible and the pictures may take on more of a video look or become over sharpened. It's just about impossible to remove image sharpening in post, but adding a bit of sharpening is quite easy. So, if you're shooting for post, consider either turning off the detail correction circuits altogether or at the very least reducing the levels applied by a decent amount.
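Here's a tiny one-dimensional sketch of why this kind of sharpening is so hard to undo (a generic unsharp-mask style enhancement, not any specific camera's detail circuit): the edge picks up over- and undershoots, and raising the contrast in the grade only stretches them further.

```python
# Generic edge enhancement: add (signal - blurred signal) back onto the image.
# The over/undershoots this creates around edges are what read as "video" sharpening.

def blur3(signal):
    """Simple 3-tap box blur with the ends clamped."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3 for i in range(1, len(signal) + 1)]

def sharpen(signal, amount=1.0):
    return [s + amount * (s - b) for s, b in zip(signal, blur3(signal))]

edge = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]   # a clean edge
print(sharpen(edge))                     # note the dip to ~0.0 and the peak at ~1.0 around the edge
```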

Color and white balance: one thing that helps keep things simple in the grade is having a consistent image. The last thing you want is the white balance changing halfway through a shot, so as a minimum use a fixed or preset white balance. I find it better to shoot with preset white when shooting for a post heavy workflow, as even if the light changes a little from scene to scene or shot to shot the RGB gain levels remain the same, so any corrections applied have a similar effect; the colourist then just tweaks the shots for any white balance differences. It's also normally easier to swing the white balance in post if preset is used, as there won't be any odd shifts added, as can sometimes happen if you have used a grey/white card to white balance.

Just as the brightness or luma of an image can clip if over exposed, so too can the colour. If you're shooting colourful scenes, especially shows or events with coloured lights, it will help if you reduce the saturation of the colour matrix by around 20%; this allows you to record stronger colours before they clip. Colour can then be added back in the grade if needed.
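As a toy illustration of the headroom this buys (simple maths, not a real camera matrix): reducing saturation pulls each channel back towards the luma value, so a strongly coloured light that would have clipped a channel can stay inside the recordable range.

```python
# Toy example: scale saturation by moving each channel towards the luma value
# (Rec-709 luma weights). Lower saturation means lower channel peaks, so
# strong colours clip later.

def adjust_saturation(rgb, sat):
    luma = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    return tuple(luma + sat * (c - luma) for c in rgb)

stage_red = (1.15, 0.10, 0.10)            # a strongly lit red that would clip at 1.0
print(adjust_saturation(stage_red, 1.0))  # red channel still above 1.0 -> clipped
print(adjust_saturation(stage_red, 0.8))  # red channel back below 1.0 -> recoverable
```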

Noise and grain: this is very important. The one thing above all others that will limit how far you can push your image in post is noise and grain. There are two sources of this: camera noise and compression noise. Camera noise is dependent on the camera's gain and chosen gamma curve. Always strive to use as little gain as possible; remember that if the image is just a little dark you can always add gain in post, so don't go adding unnecessary gain in camera. A proper grading suite will have powerful noise reduction tools, and these normally work best if the original footage is noise free and gain is then added in post, rather than trying to de-noise grainy camera clips.

The other source of noise and grain is compression noise. Generally speaking, the more highly compressed the video stream, the greater the noise will be. Compression noise is often more problematic than camera noise as in many cases it has a regular pattern or structure, which makes it visually more distracting than random camera noise. More often than not the banding seen across skies or flat surfaces is caused by compression artefacts rather than anything else, and during grading artefacts such as these can become more visible. So try to use as little compression as possible; this may mean using an external recorder, but these can be purchased or hired quite cheaply these days. As always, before a big production test your workflow. Shoot some sample footage, grade it and see what it looks like. If you have a banding problem, suspect the codec or compression ratio first, not whether it's 8 bit or 10 bit. In practice it's not 8 bit that causes banding but too much or poor quality compression (so even a camera with only an 8 bit output like the FS700 will benefit from recording to a better quality external recorder).

RAW: of course the best way to give the colourist (even if that's yourself) the best possible blank canvas is to shoot with a camera that can record the raw sensor data. By shooting raw you do not add any in camera sharpening or gamma curves that may then need to be removed in post. In addition, raw normally means capturing the camera's full dynamic range. But that's not possible for everyone and it generally involves working with very large amounts of data. If you follow my guidelines above you should at least have material that will allow you a good range of adjustment and fine tuning in post. This isn't "fix it in post"; we are not putting right something that is wrong. We are shooting in a way that allows us to make use of the incredible processing power available in a modern computer to produce great looking images. You are making those last adjustments that make a picture look great using a nice big monitor (hopefully calibrated) in a normally more relaxed environment than on most shoots.

The way videos are produced is changing. Heavy duty grading used to be reserved for high end productions, drama and movies. But now it is commonplace, faster and easier than ever. Of course there are still many applications where there isn't the time for grading, such as TV news, but grading is going to play an ever greater part in more and more productions, so it's worth learning how to do it properly and how to adjust your shooting setup and style to maximise the quality of the finished production.

Workshops and events in November/December.

Here’s a list of workshops and events that I’m involved in in the coming months:

Bucharest, Romania, O-Video are opening a new centre and will be launching it with open days including seminars on the FS700, F5 and F55 on the 5th, 6th and 7th of November.
Advanced Media Dubai, FS700 and hopefully Convergent Design Odyssey 7Q 22/23rd November.

Tallinn, Estonia, FS700 workshop, 28th November, location TBA.

New York, F5/F55 2 hour evening seminar, Sony CineAlta Forum, NYC, 4th December.
Beginner, intermediate and advanced video production skills workshops at Omega Broadcast, Austin, Texas, 7th, 9th and 10th December. A great opportunity to come and improve your video skills whatever your experience level. This was a fantastic event last year and this year should be even better. Fun, educational, inspirational.

Anyone else in USA looking for training early December? I’ll be in the US so if you or your company would like me to put on some training or an event please let me know asap.

Understanding the difference between Display Referenced and Scene Referenced.

This is really useful! Understand this and it will help you understand a lot more about gamma curves, log curves and raw. Even if you don't shoot raw, understanding this can be very helpful in working out the differences between how we see the world, the way the world really is and how a video camera sees the world.

So first of all, what is "Display Referenced"? As the name implies, this is all about how an image is displayed. The vast majority of gamma curves are display referenced. Most cameras are set up based on what the pictures look like on a monitor or TV; this is display referenced. It's all about producing a picture that looks nice when it is displayed. Most cameras and monitors produce pictures that look nice by mimicking the way our own visual system works; that's why the pictures look good.

Kodak Grey Card Plus.

If you’ve never used a grey card it really is worth getting one as well as a black and white card. One of the most commonly available grey cards is the Kodak 18% grey card. Look at the image of the Kodak Grey Card Plus shown here. You can see a white bar at the top, a grey middle and a black bar at the bottom.

What do you see? If your monitor is correctly calibrated, the grey patch should look like it's halfway between white and black. But this "middle" grey is also known as 18% grey because it actually reflects only 18% of the light falling on it. A white card will reflect 90% of the light falling on it. If we assume black is black, then you would think that a card reflecting only 18% of the light falling on it would look closer to black than white, but it doesn't; it looks halfway between the two. This is because our own visual system is tuned to shadows and the mid range and tends to ignore highlights and brighter parts of the scenes we are looking at. As a result we perceive shadows and dark objects as brighter than they actually are. Maybe this is because in the past the things that wanted to eat us lurked in the shadows, or simply because faces are more important to us than the sky and clouds.

To compensate for this, right now your monitor is only using 18% of its brightness range to show shades and hues that appear to be halfway between black and white. This is part of the gamma process that makes images on screens look natural, and this is "display referenced".

When we expose a video camera using a display referenced gamma curve (Rec-709 is display referenced) and a grey card, we would normally set the exposure level of the grey card at around 40-45%. It's not normally 50% because a white card will reflect 90% of the light falling on it, and halfway between black and the white card works out at about 45%.

We do this for a couple of reasons. In older analog recording and broadcasting systems the signal is noisier closer to black, so if we recorded 18% grey at 18% it could be very noisy. Most scenes contain lots of shadows and objects less bright than white, so recording these at a higher level gives a less noisy picture and allows us to use more bandwidth for those all important shadow areas. When the recording is then displayed on a TV or monitor, the levels are adjusted by the monitor's gamma curve so that mid-tones appear as just that, mid-tones.

So that middle grey, recorded at 45%, is brought back down so that the display outputs only 18% of its available brightness range, and thus to us humans it appears to be halfway between black and white.
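If you want to check the numbers, the standard Rec-709 transfer function puts an 18% reflectance card at roughly 41% of the recorded signal range, which is where the 40-45% figure above comes from:

```python
# Standard Rec-709 OETF (the camera-side encoding curve).

def rec709_oetf(linear_light):
    if linear_light < 0.018:
        return 4.5 * linear_light
    return 1.099 * (linear_light ** 0.45) - 0.099

print(f"18% grey  -> recorded at {rec709_oetf(0.18) * 100:.1f}%")   # ~40.9%
print(f"90% white -> recorded at {rec709_oetf(0.90) * 100:.1f}%")   # ~94.9%
```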

So are you still with me? All the above is “Display Referenced”, it’s all about how it looks.

So what is “Scene Referenced”?

Think about our middle grey card again. It reflects only 18% of the light that falls on it, yet appears to be halfway between black and white. How do we know this? Because someone has used a light meter to measure it. A light meter is a device that captures photons of light and from them produces an electrical signal to drive a meter. And what is a video camera? Every pixel in a video camera is a microscopic light meter that turns photons of light into an electrical signal. So a video camera is in effect a very sophisticated light meter.

Ungraded raw shot of a bike in Singapore. This is scene referred as it shows the scene as it actually is.

If we remove the camera's gamma curve and just record the data coming off the sensor, we are recording a measurement of the true light coming from the scene, just as it is. Sony's F5, F55 and F65 cameras record the raw sensor data with no gamma curve; this is linear raw data, so it's a true representation of the actual light levels in the scene. This is "Scene Referred". It's not about how the picture looks, but about recording the actual light levels in the scene. So a camera shooting "Scene Referred" will record the light coming off an 18% grey card at 18%.

If we do nothing else to that scene referred image and then show it on a monitor with a conventional gamma curve, that 18% grey level would be taken down by the display's gamma curve and as a result would look almost totally black (remember, in display referenced we record middle grey at 45% and the gamma curve then corrects the monitor output down to provide the correct brightness, so that we perceive it to be halfway between black and white).
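To put a number on that (assuming a typical display gamma of around 2.4, my assumption rather than any particular monitor's spec): send a scene referred 18% value straight to a display referenced monitor and it comes out at only about 1.6% of the screen's brightness, in other words almost black.

```python
# Scene referred 18% fed directly to a display expecting gamma encoded video.
display_gamma = 2.4            # assumed typical display gamma
scene_referred_grey = 0.18     # the light actually reflected by the grey card

light_output = scene_referred_grey ** display_gamma
print(f"18% shown directly -> ~{light_output * 100:.1f}% of screen brightness")   # ~1.6%
```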

This means that we cannot simply take a scene referenced shot and show it on a display referenced monitor. To get from scene referenced to display referenced we have to add a gamma curve to the scene referenced footage. When you're working with linear raw this is normally done on the fly in the editing or grading software, so it's very rare to actually see the scene referenced footage as it really is. The big advantage of using scene referenced material is that, because we have recorded the scene as it actually is, any grading we do will not have to deal with the distortions that a gamma curve adds. Grading corrections behave in a much more natural and realistic manner. The downside is that, as we don't have a gamma curve to help shift our recording levels into a more manageable range, we need to use a lot more data to record the scene accurately.

The Academy ACES workflow is based around using scene referenced material rather than display referenced. One of the ideas behind this is that scene referenced cameras from different manufacturers should all look the same. There is no artistic interpretation of the scene via a gamma curve. A scene referenced camera should be "measuring" and recording the scene as it actually is, so it shouldn't matter who makes it; they should all be recording the same thing. Of course in reality life is not that simple. Differences in the color filters, pixel design etc mean that there are differences, but by using scene referred material you eliminate the gamma curve, and as a result a grade you apply to one camera will look very similar when applied to another, making it easier to mix multiple cameras within your workflow.


Shooting the Shwe Dagon Pagoda in 4K raw (or how to edit and grade on a laptop).

I was recently invited to talk about 4K at a Sony event in Myanmar. Rather than just standing up and talking, I always like to use practical demonstrations of the things I am talking about. So for this particular workshop I decided to go to one of the local landmarks the day before the event, shoot it in 4K, then edit and grade that footage to produce a short 4K film. The object was to prove that Sony's 4K raw is not something to be afraid of. It's actually quite manageable to work with, even with just a laptop.

Having just flown in to Myanmar from a workshop in Vietnam, I was travelling light in order to keep my excess baggage charges to a minimum and to avoid too much aggravation at customs. In total I had about 35kg of luggage including enough clothes for two weeks on the road.

My very minimal equipment for this mini project comprised a Sony PMW-F5 camera with AXS-R5 recorder. I used an MTF FZ to Nikon lens adapter and a Sigma 24-70mm f2.8 DSLR lens. The tripod was the excellent Miller Solo with a Compass 15 head. Power came from a couple of Lith 150Wh batteries. A really basic shooting kit, but one that can produce remarkably good results. The weakest part of the kit is the lens; I really could have done with a wider lens and the Sigma is prone to flare, so I'm open to suggestions for a better budget zoom.

The shoot was surprisingly easy. I’ve heard many stories of Myanmar (Burma) being a closed country, but I had no issues shooting at the temple or around the city of Yangon other than curious onlookers as a large camera like the F5 is a rare sight for the locals.

I shot in 4K raw; I love the post production flexibility that the raw footage brings. In order to keep image noise to a minimum and also to keep exposure easy, I used an MLUT (LUT 2) and 640 EI ISO. I know that when I shoot at 640 EI and use 100% zebras I can expose nice and bright in the viewfinder and just keep an eye out for zebras starting to appear. With the F5 at 640 EI, 100% zebras will show just a little before clipping, so as long as you only have the very tiniest amount of zebra on your brightest highlights your exposure will be fine, nice and bright but not clipped.

During the day I spent a couple of hours at the temple and then another hour at the temple in the evening. In the YouTube video you will also see a time-lapse shot. This was done after the workshop and was not included in the original edit.

Once back at the hotel, the first stage was to transfer everything from the AXS card to a hard drive. For my travel shoots I use Seagate 2TB USB 3 drives. These are 3.5″ drives so require mains power; 2.5″ drives are not really fast enough for 4K raw editing. My hotel bedroom had one power socket on one side of the room and another on the other side, so with the AXS-CR1 card reader plugged into one and the hard drive into the other, I had to sit on the floor in the middle of the room with my laptop running on its battery while I transferred the files, about an hour's worth of clips. This took about 40 minutes.

Once the clips were on the hard drive I could begin the edit. I could have used the XDCAM HD files from the camera as proxies for the edit, but I find it just as easy to use the raw files. My laptop is an off-the-shelf 15″ Retina MacBook Pro with 8GB of RAM. I use Adobe Premiere CC with Sony's raw plugin for the edit. One thing I have found necessary is to re-boot the computer before editing the raw files; I find Premiere more stable if I do this.

Set playback resolution to 1/4

To edit Sony's 4K raw I use one of the Sony 4K raw presets that get installed when you add the Sony raw plugin. The other thing I have to do is drop the resolution of the clip viewer and timeline viewer windows to 1/4. This really isn't a big deal, as 1/4 of 4K is HD, and when just using the laptop's screen I'm not viewing the small viewer windows at a very high resolution anyway. Editing the 4K raw is smooth and painless. Dissolves and effects can be a little jumpy as you try to pull two streams of 4K off the single hard drive, but for cuts only or a simple edit it's really not a problem.

Once I’m happy with the picture cut I export an AAF file from Adobe Premiere. I then close Premiere and start DaVinci Resolve. I use the full paid version as I often want to export in 4K. The free Lite version will happily edit and grade Sony’s 4K raw, but you can only export at up to HD resolution.

Initially I set my project setting to HD as this gives smoother playback. I then open the AAF file that I saved from Premiere. Resolve will ask for a location to search for the clips, so just navigate to the parent folder of the directory where your clips are stored and click "search". After a short wait your Premiere edit will open in a timeline in Resolve. Now you can go to the "Color" room in Resolve to grade your footage. If you're using a low power system like a laptop, you may want to go to the project settings and, under the raw settings, choose "Sony Raw" and set the de-Bayer to half or quarter resolution. This will help make playback smoother and faster but sacrifices a little image quality. Don't worry though; we can force Resolve to do a full resolution de-Bayer when we are ready to export the graded clips.

I'm not going to teach you how to grade here. I'm not a colourist, but fortunately Resolve is pretty straightforward, and I can now quickly create a look, save that look and apply it to multiple clips, then go back and tweak and refine the grade where needed, perhaps adding secondary corrections here and there. For the Shwe Dagon video there were only a couple of shots where I used secondaries; these included shots with dark interiors. The overall grade was pretty straightforward.

Once I was happy with the look of the shots I went to the project settings and changed the project resolution back to 4K.  I then used the “deliver” room in Resolve to export the clips. To keep life simple I exported the grade as individual clips with the same file names as the original clips using 4K ProRes HQ to a new folder on my USB 3 hard drive. I also check the “force full resolution debayer” check box to make sure that the quality of the renders is as good as it can be. Rendering the files from Resolve on my MacBook is not a real time process. I get around 5 frames per second, so a minute of footage takes about 5 or 6 minutes. The Shwe Dagon video is a little over 4 minutes so rendering out the graded shots took about half an hour.

Once the render in Resolve is completed, I exit Resolve and go back to Premiere. In between I re-boot the laptop. Back in the original edit project, I simply import the Resolve render files and swap the raw clips in the timeline with the graded clips. I then add any titles or other effects in Premiere before finally exporting the finished piece in the codecs I need using Adobe's Media Encoder. For YouTube I export the clips as 4K .mp4 files with a bit rate of 50-75Mb/s.

It really is possible to edit and grade Sony's 4K raw on a laptop, and it's not particularly painful to do. I wouldn't want to do a long or complex project this way, but for short, simple projects it's really not a big deal. If you get a Blackmagic Thunderbolt Mini Monitor box you can use any HDMI equipped TV as an external monitor. Sony's 4K raw is easy to work with; the biggest headache is simply the size of the files. At 500GB per hour at 24/25fps there's a lot of data to manage, but this is no more than uncompressed RGB HD. In the office I have a workstation with a pair of NVIDIA GTX570 graphics cards, and these give me enough video processing power to work with 4K raw at full resolution in real time.


PMW-F5 and F55 firmware released. 4K shooting in Myanmar, Saigon Film School.

I’m currently sitting in the airport lounge at Bangkok airport, on my way to Taiwan for a training event tomorrow (teaching local camera operators to become trainers). So I thought I’d take a few minutes to catch up on things.

The big news is the release of firmware version 2.0 for Sony's F5 and F55 cameras. All I can say is wow! A huge number of new features, way too many to list here. Of course the biggie is 240fps super slow-mo 2K raw. We also get XAVC high speed at up to 120fps (eventually this will go up to 180fps). There's the ability to use XQD media, which is a fair bit cheaper than SxS, and a great new focus tool. The focus assist mode provides you with a "sharpness" bar graph that you can use to check the focus of objects in the center of the field of view. It's much more precise than peaking and a really great tool to have on a 4K camera. For exposure there is now a very clear waveform monitor display as well as a histogram. If you have the OLED EVF then there is also the addition of false color (although the EVF has to go back to Sony for an update to enable this).

The audio control button for the side LCD now works and you get easy, direct access to all the major audio functions. In addition you can now change the EI gain from the side LCD. All 4 HDSDI outs now work together, giving you two clean HDSDI outs plus two with overlays. Furthermore, S-Log2 has been added as a new Look Up Table when shooting in EI mode (don't forget you also get S-Log2 out of the AUX HDSDI on the R5 anyway, which is one way of having 709 + S-Log together). You can also now get standard definition out of SDI 3/4 and the Test out.

For the DITs out there, there is now a user gamma page where you can roll your own gamma curve, although I have not had time to play with this function yet.

All in all this is a massive update that really transforms these cameras (not that they were bad beforehand). All I need now is to get the new 2K optical low pass filter. You can download the update files from here:

PMW-F55 V2 Update.

PMW-F5 V2 Update.

AXS-R5 V2 Update.

Do note that you must do an “All Reset” immediately after the update and this update cannot be rolled back!

One of the many temples at the Shwe Dagon Pagoda, Myanmar. This is a 4K frame grab.

Last week I was in Yangon, Myanmar, running some short half day workshops on Sony cameras. We had about 200 people through the workshops over a couple of days. In between I was able to go out and shoot the Shwe Dagon Pagoda in 4K. It's a beautiful place, covered in gold bars that sparkle in the sun and diamonds that refract the lights at night into a myriad of colors. I did a quick edit in Myanmar to show at the workshops, but as soon as I get home I'll finish it off and get it up on to YouTube in 4K. It looks fantastic. A big thank you from me to the team at TMW Enterprises for looking after me so well.

Before that I was in Vietnam running a 3 day workshop for Saigon Film School. What a great bunch. Had a really good time and the 3 short films that the students produced over the course of the workshop all look great. Nice to see such enthusiasm and nice to see many of the techniques I taught put to good use in the films. I hope to go back in the new year to do something even bigger, but only if the faculty staff promise not to get me quite so drunk with “bottoms up” giant glasses of wine at the after school party.

Coming up: I'm preparing some articles on the difference between "Scene Referred" and "Display Referred" shooting and workflows. This is very relevant to anyone shooting raw or looking at implementing an ACES workflow. There will also be a back to basics tutorial on getting the very best from a lens, whether that's a built-in zoom or a prime. These should go online in the next week. After that I have a big TV commercial shoot.

PMW-F5 and PMW-F55 Version 2 firmware update.

Just a brief update from IBC. Firmware version 2.0 will be released at the end of September and will include the high speed raw shooting modes, 2K at up to 240fps. It will also add waveform, vectorscope and histogram displays.

The audio button and file button on the side panel will become active. The audio button adds two pages of audio control functions, including quick switching for each channel between manual and auto as well as manual level control via the large menu knob. The file button gives quick access to load and save a number of "all files" so you can quickly switch between different camera setups.

Further new features are full support for the Fujinon Cabrio lens (the lens will need a firmware update too) with zoom control and rec start/stop. Support for Sony’s new LA-FZB1 (5000 Euro approx) and FZB2 (9000 Euro approx) lens adapters for B4 lenses.

This is a very significant update for the cameras and includes a lot of other smaller new features.

When shooting 2K the camera uses the full sensor, but it is read in a different way, one that creates larger "virtual" pixels (my words, not Sony's). This means that, as the sensor is now operating as a 2K sensor, the factory fitted 4K optical low pass filter (OLPF) is not optimum for controlling aliasing and moiré. Sony will be offering (for sale) a drop-in replacement 2K OLPF. The 2K OLPF will control aliasing and moiré at 2K as well as providing a softer look at 4K for those wanting this. It is almost essential for the 2K high speed modes and gives a smoother look at 4K that works well for beauty, cosmetic, period drama and similar projects. I could not get a price for the 2K OLPF but I have been assured that it will be affordable (even for me and my small budget).

Version 3 firmware will be released at the end of the year, probably a little after Christmas. Version 3 will add the compressed XAVC high speed modes as well as a new feature, a 2K Super 16mm crop mode. The S16 crop mode will allow you to use S16 PL mount lenses, or B4 zoom lenses via the MTF FZ-B4 adapter without the 2x extender and with only a 0.3 stop light loss. Also included will be AES/EBU digital audio and other features not listed here.

Aliasing when shooting 2K with a 4K sensor (FS700, PMW-F5, PMW-F55 and others).

There is a lot of confusion and misunderstanding about aliasing and moiré, so I've thrown together this article to try and explain what's going on and what you can (or can't) do about it.

Sony’s FS700, F5 and F55 cameras all have 4K sensors. They also have the ability to shoot 4K raw as well as 2K raw when using Sony’s R5 raw recorder. The FS700 will also be able to shoot 2K raw to the Convergent Design Odyssey. At 4K these cameras have near zero aliasing, at 2K there is the risk of seeing noticeable amounts of aliasing.

One key concept to understand from the outset is that when you are working with raw, the signal out of the camera comes more or less directly from the sensor. When shooting non-raw, the output is derived from the full sensor plus a lot of extra, very complex signal processing.

First of all, let's look at what aliasing is and what causes it.

Aliasing shows up in images in different ways. One common effect is a rainbow of colours across a fine repeating pattern, this is called moiré. Another artefact could be lines and edges that are just a little off horizontal or vertical appearing to have stepped or jagged edges, sometimes referred to as “jaggies”.

But what causes this and why is there an issue at 2K but not at 4K with these cameras?

Let's imagine we are going to shoot a test pattern that looks like this:

Test pattern, checked shirt or other similar repeating pattern.

And let's assume we are using a Bayer sensor such as the one in the FS700, F5 or F55, with a pixel arrangement like this, although it's worth noting that aliasing can occur with any type of sensor pattern or even a 3 chip design:

Sensor with bayer pattern.

Now let's see what happens if we shoot our test pattern so that the stripes of the pattern line up with the pixels on the sensor. The top of the graphic below represents the pattern or image being filmed, the middle is the sensor pixels and the bottom is the output from the green pixels of the sensor:

Test pattern aligned with the sensor pixels.

As we can see, each green pixel "sees" either a white line or a black line and so the output is a series of black and white lines. Everything looks just fine… or does it? What happens if we move the test pattern or move the camera just a little bit? What if the camera is just a tiny bit wobbly on the tripod? What if this isn't a test pattern but a striped or checked shirt and the person we are shooting is moving around? In the image below the pattern we are filming has been shifted to the left by a small amount.

Test pattern misaligned with the pixels.

Now look at the output: it's nothing but grey, the black and white pattern has gone. Why? Simply because each green pixel is now seeing half of a white bar and half of a black bar. Half white plus half black equals grey, so every pixel sees grey. If we were to slowly pan the camera across this pattern then the output would alternate between black and white lines when the bars and pixels line up and grey when they don't. This is aliasing at work. Imagine the shot is of a person in a checked shirt: as the person moves about, the shirt will alternate between being patterned and being grey. As the shirt will not be perfectly flat and even, different parts of it will go in and out of sync with the pixels, so some parts will be grey and some patterned; it will look blotchy. A similar thing will be happening with the colours, as the red and blue pixels will sometimes see the pattern and at other times not, so the colours will flicker and produce strange patterns. This is the moiré that can look like a rainbow of colours.
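You can see the same effect in a very small one-dimensional simulation (a rough sketch of the diagrams above, not a model of any real sensor): sample a black and white bar pattern whose pitch matches the pixel pitch, first lined up with the pixels and then shifted by half a pixel.

```python
# Each "pixel" averages the light falling across its own width. When the bars
# line up with the pixels the pattern is resolved; shift it half a pixel and
# every pixel averages half black, half white: uniform grey.

def scene(x, shift=0.0):
    """Black/white bars, one bar per pixel width (white first)."""
    return 1.0 if ((x + shift) % 2.0) < 1.0 else 0.0

def sample_pixels(num_pixels, shift=0.0, subsamples=100):
    out = []
    for p in range(num_pixels):
        vals = [scene(p + (i + 0.5) / subsamples, shift) for i in range(subsamples)]
        out.append(round(sum(vals) / subsamples, 2))
    return out

print("aligned:", sample_pixels(6, shift=0.0))   # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0] - pattern resolved
print("shifted:", sample_pixels(6, shift=0.5))   # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] - it all turns grey
```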

So what can be done to stop this?

Well, what's done in the majority of professional level cameras is to add a special filter in front of the sensor called an optical low pass filter (OLPF). This filter works by significantly reducing the contrast of the image falling on the sensor at resolutions approaching the pixel resolution, so that the scenario above cannot occur. Basically the image falling on the sensor is blurred a little so that the pixels can never see only black or only white. This way we won't get flickering between black & white and grey if there is any movement. The downside is that some contrast and resolution will be lost, but this is better than having flickery jaggies and rainbow moiré. In effect the OLPF is a type of de-focusing filter (for the techies out there, it is usually something called a birefringent filter). The design of the OLPF is a trade off between how much aliasing is acceptable and how much resolution loss is acceptable. The OLPF cut-off isn't instant; it's a sharp but gradual cut-off that starts somewhere lower than the sensor resolution, so there is some undesirable but unavoidable contrast loss on fine details. The OLPF will be optimised for a specific pixel size and thus image resolution, but it's a compromise. In a 4K camera the OLPF will start reducing the resolution/contrast before it gets to 4K.

(As an aside, this is one of the reasons why shooting with a 4K camera can result in better HD, because the OLPF in an HD camera cuts contrast as we approach HD, so the HD is never as sharp and contrasty as perhaps it could be. But shoot at 4K and down-convert and you can get sharper, higher contrast HD).

So that’s how we prevent aliasing, but what’s that got to do with 2K on the FS700, F5 and F55?

Well, the problem is this: when shooting 2K raw or in the high speed raw modes, Sony are reading out the sensor in a way that creates a larger "virtual" pixel. This almost certainly has to be done for the high speed modes to reduce the amount of data that needs to be transferred from the sensor into the camera's processing and recording circuits when using high frame rates. I don't know exactly how Sony are doing this, but it might be something like my sketch below:

Using adjacent pixels to create larger virtual pixels.

So instead of reading individual pixels, Sony are probably reading groups of pixels together to create a 2K Bayer image. This creates larger virtual pixels and in effect turns the sensor into a 2K sensor. It is probably done on the sensor during the read out process (possibly simply by addressing 4 pixels at the same time instead of just one), and this makes high speed continuous shooting possible without overheating or overload, as there is far less data to read out.
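For anyone who prefers code to a sketch, here's a hedged illustration of that kind of same-colour binning (my guess at the principle, not Sony's actual sensor read-out): averaging each group of four same-colour photosites turns a 4K Bayer mosaic into a 2K one with larger effective pixels.

```python
# Average each 2x2 group of same-colour photosites (which sit two sites apart
# in a Bayer mosaic) to produce a half-resolution Bayer mosaic.

def bin_bayer_2x(mosaic):
    """mosaic: 2D list of raw Bayer samples, rows and columns divisible by 4."""
    rows, cols = len(mosaic), len(mosaic[0])
    out = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    for y in range(rows // 2):
        for x in range(cols // 2):
            quad_y, p = divmod(y, 2)   # which output Bayer quad, and row within it
            quad_x, q = divmod(x, 2)   # which output Bayer quad, and column within it
            samples = [mosaic[4 * quad_y + p + 2 * i][4 * quad_x + q + 2 * j]
                       for i in (0, 1) for j in (0, 1)]
            out[y][x] = sum(samples) / 4.0
    return out

# A 4x4 RGGB mosaic becomes a 2x2 RGGB mosaic, each site the mean of four same-colour sites.
tiny = [[10, 20, 12, 22],
        [30, 40, 32, 42],
        [14, 24, 16, 26],
        [34, 44, 36, 46]]
print(bin_bayer_2x(tiny))   # [[13.0, 23.0], [33.0, 43.0]]
```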

But now the standard OLPF, which is designed around the small 4K pixels, isn't really doing anything, because in effect the new "virtual" pixels are now much larger than the original 4K pixels the OLPF was designed around. The standard OLPF cuts off at 4K, so it isn't having any effect at 2K, and a 2K resolution pattern can fall directly on our 2K virtual Bayer pixels and you will get aliasing. (There's a clue in the filter name: optical LOW PASS filter, so it will PASS any signals that are LOWer than the cut off. If the cut off is 4K, then 2K will be passed as this is lower than 4K, but as the sensor is now in effect a 2K sensor we now need a 2K cut off.)

On the FS700 there isn't (at the moment at least) a great deal you can do about this. But on the F5 and F55 Sony have made the OLPF replaceable. By loosening one screw, the 4K OLPF can be swapped out for a 2K OLPF in just a few seconds. The 2K OLPF will control aliasing at 2K and high speed, and in addition it can be used if you want a softer look at 4K. The contrast/resolution reduction the filter introduces approaching 2K will give you a softer, "creamier" look at 4K which might be nice for cosmetic, fashion, period drama or other similar shoots.

Replacing the OLPF on a Sony PMW-F5 or PMW-F55. Very simple.

FS700 owners wanting to shoot 2K raw will have to look at adding a little bit of diffusion to their lenses. Perhaps a low contrast filter, or a net or stocking over the front of the lens, will slightly soften the image and prevent aliasing. Maybe someone will bring out an OLPF that can be fitted between the lens and camera body, but for the time being, to prevent aliasing on the FS700 you need to soften, defocus or blur the image a little to prevent the camera resolving detail above 2K. Maybe using a soft lens will work, or just very slightly de-focussing the image.

But why don’t I get aliasing when I shoot HD?

Well, all these cameras use the full 4K sensor when shooting HD. All the pixels are used and read out as individual pixels. This 4K signal is then de-Bayered into a conventional 4K video signal (NOT Bayer). This 4K (non raw) video will not have any significant aliasing, as the OLPF is 4K and the derived video is 4K. Then this conventional video signal is electronically down-converted to HD. During the down conversion process an electronic low pass filter is used to prevent aliasing and moiré in the newly created HD video. You can't do this with raw sensor data, as raw is BEFORE processing and derived directly from the sensor pixels, but you can do it with conventional video, as the HD is derived from a fully processed 4K video signal.

I hope you understand this. It’s not an easy concept to understand and even harder to explain. I’d love your comments, especially anything that can add clarity to my explanation.

UPDATE: It has been pointed out that it should be possible to take the 4K bayer and from that use an image processor to produce a 2K anti-aliased raw signal.

The problem is that, yes, in theory you can take a 4K signal from a Bayer sensor into an image processor and from that create an anti-aliased 2K Bayer signal. But the processing power needed to do this is incredible, as we are looking at taking 16 bit linear sensor data and converting it to new 16 bit linear data. That means using DSP with a massive bit depth and a big enough overhead to handle 16 bit in and 16 bit out, so as a minimum an extremely fast 24 bit DSP, or possibly a 32 bit DSP, working with 4K data in real time. This would have a big heat and power penalty and I suspect is completely impractical in a compact video camera like the F5/F55. This is rack-mount, high power workstation territory at the moment. Anyone that's edited 4K raw will know how processor intensive it is trying to manipulate 16 bit 4K data.

When shooting HD you're taking 4K 16 bit linear sensor data but only generating 10 bit HD log or gamma encoded data, so the processing overhead is tiny in comparison and a 16 bit DSP can quite happily handle this. In fact you could use a 14 bit DSP by discarding the two LSBs, as these would have little effect on the final 10 bit HD.

For the high speed modes I doubt there is any way to read out every pixel from the sensor continuously without something overheating. It's probably not the sensor that would overheat but the DSP. The FS700 can actually do 120fps at 4K for 4 seconds, so clearly the sensor can be read in its entirety at 4K and 120fps, but my guess is that something gets too hot for extended shooting times.

So this means that somehow the data throughput has to be reduced. The easiest way to do this is to read adjacent pixel groups. This can be done during the readout by addressing multiple pixels together (binning). If done as I suggest in my article, this adds a small degree of anti-aliasing. Sony have stated many times that they do not pixel or line skip and that every pixel is used at 2K, but either way, whether you line skip or pixel bin, the effective pixel pitch increases and so you must also change the OLPF to match.

A quick look at the Sony PXW-Z100 at IBC.

IBC is still in full swing and I'm very busy at the show, but I managed to spend a bit of time with the Z100 today. I was able to compare it with some of the other cameras on the camera set, but this is a very crude first look at a camera running beta firmware and picture settings. So it may not 100% represent the final product, but I do expect it to be pretty close.

Having played with the Z100 now, I have to say I am pleasantly impressed. It is not as sensitive as the PMW-200 or an EX1; I estimate it's about 1.5 stops less sensitive at 0db. But it is remarkably noise free, with slightly less noise than a PMW-200 at 0db. Even at +9db (which brings it back up to a similar sensitivity to a PMW-200/EX1 at 0db) the noise is not too bad. Fast pans at +9 or +12db will reveal some image smear due to the 3D noise reduction having to work harder, but it's not too bad and usable for most applications.

I thought it would be worse than this. There must be a lot of noise reduction and processing taking place to produce this clean an image, but overall the NR is very transparent and well executed. I estimate dynamic range at about 10 stops, maybe a little more, but certainly nowhere near the 14 stops you can get from a camera like the F5 or FS700 in raw mode. The PMW-300 on the Sony booth is showing more dynamic range than the Z100, which I did expect due to the small pixel size. The standard 709 gamma curve with knee works quite well. The Cinematone gammas don't bring any more dynamic range as far as I can tell, but the highlight roll off is more pleasing and a little more natural looking with the Cinematone gammas.

My biggest reservation is focussing the camera with the built in viewfinder or LCD. The rear finder is really not up to the task of focussing for 4K. The LCD panel is better, but with no magnifier or monocular you're going to have to have damn good close up eyesight to be able to use it for accurate focus at 4K. This is not an issue unique to this camera, as no camera I know of has a viewfinder better than 1080P and most are only 720P or 1/4 HD (960×540, which is what I believe the Z100 is), but not having a magnifier makes this even worse than most. So you're almost certainly going to have to rely on autofocus to get the focus spot on in many situations. Fortunately the autofocus is fast and accurate. I think with these smaller cameras the use of autofocus will become common even for us old "I never use autofocus" operators, just as autofocus is now normal even for professional photographers. There is a good colored peaking function that works well, and the deeper DoF from the small sensor does mean that focus errors are not quite as telling as on a large sensor camera. But even so, the LCD, for me at least, is far from ideal for accurate focus at 4K. I think you're going to need to either add a 3rd party loupe or use an external finder such as the Alphatron with focus magnification.

Build quality is good; the camera feels very solid yet lightweight, and even with a high capacity battery it is comfortably under 3kg. It uses the very common NP-F type batteries. One minor gripe is that the position of the shoe on the handle in front of the LCD means that if you have a large light or radio mic attached to the shoe you can't open and close the LCD panel.

The menu system is lifted straight from the PMW-F5 and F55 and most of the menu pages are very similar. Scene file settings are quite comprehensive and there is a lot of scope for fine tuning the pictures with matrix, detail and gamma settings. However, as I said, there are no extended dynamic range Cinegammas or Hypergammas, but you can adjust the knee and black gamma to fine tune your contrast range and dynamic range.

Overall, it's better than I expected. The 4K images are sharp and clear, not overly sharpened, and they look quite natural. At 0db the noise levels are very low and the image is quite clean, but sensitivity is lower than we expect from a modern HD camera (no big surprise). Dynamic range is also a little lower than you can get from a good 1/2″ camera, but not significantly so. I think Sony have done a good job of squeezing as much as they can from this small sensor with its very small pixels. The 20x zoom seems to stay nice and sharp across the zoom range, even out in the corners. As an F5 owner there have been many occasions when I have longed for a sharp 20x zoom that I can use when shooting 4K. That's probably something I'll never be able to afford for my F5, but the Z100 opens up the possibility of having that wide zoom range and 4K. Providing the scene isn't too dark or too contrasty, the Z100 would allow me to get those shots for a lot, lot less money than a very big, very heavy PL mount zoom.
