Sony recently released a set of four cube LUTs (Look-Up Tables) for use with S-Log2. You can download those LUTs here: DaVinciResolveCubeFiles. In addition there are many other LUTs that you can use with S-Log2 to help create different looks; most LUTs designed for S-Log and Arri LogC also work reasonably well with S-Log2. In this article I'm going to look at how you can use these both on set and in DaVinci Resolve. Currently you cannot upload LUTs to the F55/F5 or FS700 on set, so if you want to use a LUT to alter your monitor output you need some additional hardware. One of the most affordable solutions is the Blackmagic Design HDLink Pro. This $500 box has HDSDI inputs and outputs as well as other output options, including DisplayPort, to which you can connect either a DVI or HDMI monitor with the appropriate adapter.
By placing the HDLink between the camera's HDSDI out and your monitor you can apply a LUT: connect a computer to the HDLink via USB and use the HDLink software utility to import the LUT into the Blackmagic box. If you don't have a LUT you can use the HDLink software to adjust many parameters of the picture to create your desired look live on set. There is one limitation, however: without a LUT you cannot adjust the overall saturation of the image, so when shooting with S-Log2 and S-Gamut the pictures will not have the full saturation (colour) of your final graded output. To compensate you can always turn up the saturation on the monitor; just remember to restore the saturation control to normal before you put the monitor away at the end of the shoot! When using a 3D LUT like the cube LUTs linked above you do get a full correction, including saturation. The computer (which can of course be a laptop) does not need to remain connected to the HDLink. Once you have uploaded the LUT to the box and are happy with your look you can disconnect the computer; the HDLink will remember the LUT and settings. It's always a good idea to plug a computer back in from time to time to check how the box is set, especially if you're making exposure adjustments using the LUT'd output.
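For anyone curious what the HDLink is actually doing with that .cube file, here is a rough Python sketch of the idea behind a 3D LUT: every RGB value is looked up in a lattice of pre-computed colours. This is only an illustration — it builds a tiny identity LUT in code rather than parsing a real .cube file, and real hardware interpolates between lattice points (trilinear or tetrahedral) rather than snapping to the nearest one.

```python
# Illustrative sketch of a 3D cube LUT. An identity LUT maps every
# colour to itself; a creative LUT would store graded colours at
# each lattice point instead.

def build_identity_lut(size):
    """A size x size x size lattice mapping RGB to itself."""
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(size)]
              for g in range(size)]
              for r in range(size)]

def apply_lut(lut, rgb):
    """Look up an RGB triple (components 0.0-1.0).

    Nearest-neighbour for brevity; real implementations
    interpolate between the surrounding lattice points.
    """
    size = len(lut)
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return lut[r][g][b]

lut = build_identity_lut(17)           # 17-point LUTs are common
print(apply_lut(lut, (0.5, 0.25, 1.0)))  # identity -> (0.5, 0.25, 1.0)
```

Because the lattice is coarse (17 or 33 points per axis is typical), the interpolation step matters for smooth gradients; the nearest-neighbour shortcut above would band badly on real footage.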
If the plan is to use the same LUT both on set and in the grade then you must set up the camera to output correctly. Most LUTs are designed for use with log recording, so in the majority of cases the camera will need to be set to output S-Log2 (S-Log with the PMW-F3). If you are shooting raw using the Sony AXS-R5 recorder then you can take the AUX out from the R5 and use that to feed the HDLink box. This output is a real-time de-bayer of the raw recording with S-Log2 applied. By using this output you can still use the F5 and F55's built-in LUTs if needed, as the built-in LUTs are not applied to the R5's AUX output. If you do use EI gain then this will have an impact on your LUT, as the recordings (and AUX output) will most likely be exposed brighter, but the result should be similar for both the on-set view via the HDLink and in post production. For FS700 users shooting raw with the R5, the additional AUX output is the only way to feed HDSDI to the HDLink, as the camera's HDSDI is used to feed the raw data to the R5.
After the shoot, to use the LUT in DaVinci Resolve, you must first place the LUT or LUTs in the correct library folder before opening Resolve. The LUTs must be saved in the .cube format to the Cinespace folder, or to a new sub-directory, in:
Mac:
System Drive/Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT/
On a PC the folder may be hidden. If so, go to Windows Explorer, select “Organize” then “View” and enable the option to show hidden folders.
Once you have installed your LUTs you can open Resolve and import your S-Log2 footage. If you are shooting raw with the AXS-R5 then you need to open the project settings and ensure that the raw files are being correctly displayed using S-Log2 and S-Gamut. I do this by going to the “Camera Raw” page, setting “Decode Using” to “Project” and then changing the gamut to “SGamut” and the gamma to “SLog2”. This ensures all Sony raw clips will be treated as S-Log2 even if you did use an internal camera LUT.
Next select the clip or clips that you wish to apply the LUT to, right-click on the clip, select “3D LUT” and go to the “Cinespace” folder or the folder you created. There you should see your LUTs listed; simply choose the LUT you wish to use. You may need to add some gain or lift adjustment to tweak your images, but they should look as they did via the HDLink box.
You can also use Resolve to create a .cube LUT for use on set. Simply shoot some test S-Log2 clips and grade them as you wish the final image to look. Once you're happy with your look, right-click on the clip and choose “Export LUT”. Resolve will then create a .cube LUT that you can use with the HDLink.
Although this is really aimed at those shooting using log, this process will work with almost any camera and any gamma curve. The key thing to remember is to always use the same camera settings with LUTs tailored to those particular settings. So, for example, you could use this with a PMW-200 using a Hypergamma, a Canon C300 using C-Log, or a DSLR. Provided the LUT was created to work with the way the camera is set up, it should work correctly. Just don't expect a LUT designed for log to work with a non-log camera.
Here are some better quality images of the PMW-300, including its base, where there are 2x 1/4″ mounting threads plus the lever for adjusting the shoulder brace.
Here you can see the thumbscrew that attaches the viewfinder mounting bracket to the camera body. There is some side to side adjustment at this point. The viewfinder also slides off the end of the fore-aft adjustment rail by pulling out a small pin.
Adobe released Premiere Pro CC today. This includes Media Encoder CC, and both packages have support for Sony's XAVC codec. It's a major update, so it can be installed alongside Premiere Pro CS6 to allow for easy migration from one to the other.
Here are some pictures of the new PMW-300 taken at Broadcast Asia. I should be able to get some higher quality images after the show, when it’s no longer in the box. Good news is that the viewfinder is detachable. There are 2 HDSDI outs plus HDMI and dual RCA audio.
Rear connectors and extended shoulder pad. The camera will actually sit on your shoulder, although it is front heavy.
Ever since the launch of the PMW-200 people have been asking about whether the EX3 would also be replaced. With the EX3 being such a popular camera it wasn’t really a case of “if” but more of a case of “when”.
So here it is, the PMW-300. Like the PMW-200 this is an evolution of the EX1R/EX3 cameras with many similarities but with that all important 50Mb/s 422 broadcast XDCAM codec. Like the EX3 it has 1/2″ sensors and it uses the same EX3 type lens mount, so can use the same lenses as the EX3. As well as the 14x 1/2″ zoom there is also now a new 16x zoom. In addition via adapters you can use both 1/2″ hot-shoe lenses and 2/3″ B4 lenses (1.4x magnification). You can also use an adapter to use Nikon DSLR lenses (5x magnification) for long focal length shots, so it’s sure to be popular with wildlife and natural history shooters. This is almost certainly the smallest self contained broadcast quality camcorder that can take interchangeable lenses.
The shape and design of the camera is different to the upward-curving EX3. The body is a very functional rectangular shape that sits up against your shoulder like the EX3, and it incorporates an extending flip-down shoulder/chest pad for added stability. The viewfinder design is new: it has a higher resolution panel than the one in the original EX3 and is closer in design to the PMW-350 or PMW-F5 LCD viewfinder. It's mounted to the body with a rotating arm that allows about 4″ of forward, backward and height adjustment, so adapting the camera for use with a full shoulder mount should be quite straightforward.
As this camera uses essentially the same sensors as the EX3, sensitivity and dynamic range will be little different. However, a new noise reduction system, which Sony is calling 3DNR, should offer lower noise, especially in low light situations.
At launch the camera will have the Sony XDCAM codec built in, offering 50Mb/s 422 and 35Mb/s 420 as well as both IMX and DVCAM in standard definition, so there's a great range of codec choices out of the box. Next year you'll be able to add the new XAVC codec as an option. This will be the Long GOP version of the codec announced at NAB, also coming as an option to the PMW-400. Throw in features like genlock and RCP remote control and not only is this a great camera for use in the field, but it also becomes an interesting option for small or low cost studio applications.
For hooking up to external devices you have the usual HDSDI and HDMI outputs as well as Firewire/ILink for the HDV and DVCAM modes.
I’m quite sure this camera will be as successful as the EX3, maybe more so thanks to the out-of-the-box broadcast codec and ability to add the 10 bit XAVC codec next year. I hope to get hold of one very soon for a full review, as soon as I do I’ll let you know more about it.
So here it is… a short compilation of clips shot across 10 days in the US this May. To get these shots I drove over 3,500 miles criss-crossing the states of Oklahoma, Texas, Nebraska, Kansas, Colorado and South Dakota. It was a trip that started and ended with some tragic events and left me quite unsure of my own emotions and thoughts with regard to storm chasing, something that has been a very big part of my career, business and life for nearly 15 years.
The aim of the trip was to start building up a library of 4K stock footage to supplement the extensive (200+ hours) library of high quality HD storm and natural extremes footage that I already hold and sell worldwide, almost all of which was shot using Sony XDCAM camcorders of one type or another. To help share the costs I opened up the trip as a week-long workshop, and I was to be joined by Les from Scotland and Michael from Australia. A few days before my scheduled departure from the UK to Oklahoma I was looking at the long range weather forecasting models (a vital part of storm chasing) when I noticed that a highly dangerous weather pattern looked set to hit Oklahoma the following day. A quick call to the airline and some frantic bag packing saw me heading out in a rush on the first available flight to Oklahoma City on May 19th.
My shooting kit included my PMW-F5 with R5 raw recorder, a selection of DSLR lenses (Canon mount), a Miller Solo tripod, media, batteries, chargers, and a whole bunch of storm chasing electronics and computers. When you're packing in a hurry like this a checklist can be a life saver; forgetting something as simple as a cable when you won't have time to find a replacement can ruin a shoot. 24 hours later, me and my 75kg of gear were in Oklahoma City.
The morning of May 20th was like many spring mornings in Oklahoma. Warm, humid and a little overcast. The local TV stations were all warning of the possibility of severe storms, but this isn’t uncommon in tornado alley in the spring. I spent a couple of hours fitting all my storm chasing gadgets to the car and analysing weather data, trying to figure out where the best chances of seeing a storm or tornado would be. I didn’t need to go far. By lunchtime I was near Lawton in Oklahoma and soon after the first storms of the day started to get going. I followed a storm south of Oklahoma City that produced a brief tornado. I couldn’t find a safe place to stop and shoot it so I didn’t get any footage, frustrating! Meanwhile on the mobile weather radar in the car I could see another very strong storm approaching Oklahoma City. At 2.56pm this storm produced a large, violent tornado that struck the Oklahoma suburb of Moore. Listening to this unfold just a few miles away on local radio stations and watching it on my mobile radar was quite shocking. The storm had developed very quickly, very early in the day (storms don’t typically get going until early evening) and it was obvious it was going to be a killer. I didn’t chase it, it was in a busy city and congested roads and panicking people would make it a dangerous place to be.
That evening in my hotel the full story of the Moore tornado (http://en.wikipedia.org/wiki/2013_Moore_tornado) was on every TV channel. Sadly 23 people were killed, over 12,000 homes were destroyed and 30,000 people were displaced. While I love seeing the power and beauty of mother nature, it deeply saddens me when things like this happen, but happen they will whether I am there or not. Little did I know that terrible things would come even closer to home later in the week.
The next day more storms were forecast, this time in Texas, so it was on with the storm chasing. I was shooting with my PMW-F5 with the R5 raw recorder docked on the back; the more I use this camera the more I like it. One of the big issues with storm chasing is the speed at which things change, so I needed an all-round lens that could shoot wide panoramas one moment but then also get in tight for action shots. In addition I needed to be light and very portable. This meant using a DSLR super zoom. I was going to use a Sigma 18-250 but that went faulty just before I was due to leave home, so I used a Tamron 18-270mm lens (the Tamron focuses back to front, which is why I prefer the Sigma). This is an image stabilised lens, very useful when shooting in high winds! To get the stabilisation to work you have to use a powered mount with electronic control. This means a Canon mount, as no one makes an active Nikon mount. I used one of my prototype servo zoom handgrips with Canon iris and remote focus control; other mount options would include the MTF Effect or Optitek Canon mounts. I shot at 23.976p in 4K raw and XDCAM HD, which gave me a little over an hour of 4K on a single AXS card. Why XDCAM and not XAVC for the secondary (proxy) recordings? Simply because I can edit the XDCAM material with any application. XAVC isn't, at the time of writing, supported in Premiere and that's what I currently edit with (it's coming in Premiere CC, due very, very soon). In addition a 32GB SxS card holds 60 minutes of footage, just like the 512GB AXS card at 24p raw, so I have the same clips on pairs of cards rather than all over the place.
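That matched runtime is easy to sanity-check with some back-of-envelope arithmetic: card capacity divided by total data rate gives minutes of footage. The data rates in the sketch below are my rough assumptions (codec bitrate plus audio and file overhead), not Sony's published specifications:

```python
# Rough runtime estimate: how many minutes a card holds at a
# given total data rate. Rates are illustrative assumptions.

def runtime_minutes(card_gb, total_mbps):
    bits = card_gb * 8 * 1000      # capacity in megabits (decimal GB)
    return bits / total_mbps / 60

print(round(runtime_minutes(32, 70)))     # 32GB SxS, ~70 Mb/s total
print(round(runtime_minutes(512, 1100)))  # 512GB AXS, ~1.1 Gb/s raw
```

Both work out to roughly an hour, which is why a pair of cards fills at about the same pace.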
The 3rd day was when I was joined by Les and Michael. It was also a chase day, so an early start as we headed out to the Texas Panhandle. The storms that formed that afternoon produced some very strong winds, hail the size of baseballs and dust. Tons and tons of dust from the parched Texas farmland was getting sucked into the storms and then blown back out again, creating zero visibility sand storms. By the end of the day everything was covered in sandy, gritty dust: the cameras, the car and us. The F5, being solid state, just carried on working despite the dust, but it did require a good clean with a soft brush at the end of the day. If you have a dust covered camera don't use canned or compressed air to blow the dust off; the compressed air can blast dust into the camera's interior and do a lot more damage than good. A soft paint brush will quickly remove dust from the camera's exterior. If you have dust on the optical port, a gentle puff with a hand-held puffer can be used to blow the dust off before you wipe it with a clean, high quality lens cloth. Also keep your lens cloth in a sealed bag, like a ziplock bag; cleaning a lens or other optics with a dusty or gritty lens cloth is not a clever thing to do.
As the week progressed we were to see some incredible storms. One night we witnessed one of the most impressive lightning shows that I have ever seen: a spinning supercell thunderstorm was throwing out bolts of lightning every few seconds and we had a grandstand view. While the Sony F55 uses frame image scanning to eliminate rolling shutter artefacts, the F5, like most CMOS cameras, does not, so it suffers from a degree of rolling shutter. A trick I learnt some time ago when shooting lightning, strobe lighting or flash photography with a CMOS camera is to use the slowest shutter speed possible. This means turning the shutter off and using straight 23.976p for lightning during the day; at night I use a 2 frame slow shutter. Shoot like this and 90-95% of the lightning I shoot is not affected by rolling shutter effects. Sadly my budget wouldn't stretch to the F55, I could only afford the F5. For my lightning shoot in Arizona later in the year I'll probably hire an F55.
As the end of my planned storm chasing shoot drew near, while I had shot some amazing storm footage I had not yet captured a big tornado in 4K. With a lot of money invested in the shoot I was starting to feel a little disappointed, but the weather gods decided to play ball. My morning weather forecast had suggested Salina in Kansas as a good place to target for the day, so off to Salina we went. As we approached the town the first storms of the day started to fire. After briefly chasing one short-lived storm we were soon parked up right in front of a second, almost stationary supercell thunderstorm. You didn't need to be a weather expert to see that this storm meant business. The clouds above us were swirling and turning. Just a short distance ahead a wall cloud had formed; this angry-looking low cloud was spinning rapidly and soon a small tornado formed. Trying to expose accurately when you're in a hurry, fighting strong winds and with only moments to get the shot can be difficult at the best of times. I was shooting raw, so I was able to take advantage of the F5's built-in look-up tables and Cine EI gain. By dropping the EI gain to 800 EI (use 640 EI on the F55) and exposing with just the smallest hint of zebra 2 (100%) starting to show on my brightest highlights, I knew that my exposure was good and bright but not quite clipping. This gives me nice low noise levels after grading and is an easy way to shoot.
The tornado didn't last long, but just a few minutes later a second tornado formed. This was a big one, a powerful one. In the viewfinder I could see it getting bigger and bigger, yet it wasn't moving left or right. This isn't normally a good sign; normally you only have a few moments to get a quick shot of the tornado before it's time to run away, but this tornado barely moved at all, it simply kept getting bigger and bigger. Its slow movement allowed me to get some great shots, some wide, some close up. Now I was happy! Following storm chaser tradition we celebrated that night with a steak dinner.
At the end of each day I made a backup of my footage. Using the Sony AXS-CR1 card reader, a Retina MacBook Pro and a 3.5″ 2TB desktop hard drive I had space to back up the equivalent of 4 full AXS cards, a little over 4 hours of material, with a full card taking about 30 minutes to transfer. Once the cards were transferred to the 3.5″ drive a secondary copy was made to a 2.5″ drive overnight. The 2.5″ drives are much slower, but it's easier to hand carry them on flights. The SxS cards were backed up to a NextoDI NVS-Air. These are great stand-alone devices for backing up SxS cards: you simply pop the card into the slot in the NVS-Air's side, select the backup mode you want, fast or secure, and off it goes, backing up your card. A 32GB SxS card can take as little as 6 minutes to back up. It's simple, it's fast and you can even plug in a second drive to make two simultaneous copies.
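For anyone scripting their own offloads instead of using a dedicated device, the important part is verifying the copy before a card is reused. Here is a minimal Python sketch (the paths are placeholders of your choosing) that copies a card's contents and then compares checksums, the same kind of safety net the "secure" mode on a backup device provides:

```python
# Sketch of a verified card offload: copy every file, then confirm
# each copy by comparing MD5 checksums before trusting it.
import hashlib
import shutil
from pathlib import Path

def md5(path, chunk=1024 * 1024):
    """Checksum a file in 1MB chunks so large clips fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload(card_dir, backup_dir):
    card, backup = Path(card_dir), Path(backup_dir)
    for src in card.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(card)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)          # copy2 preserves timestamps
        if md5(src) != md5(dst):
            raise IOError(f"checksum mismatch: {src}")
    print("offload verified")
```

A plain drag-and-drop copy tells you nothing if a drive silently corrupts a file; the read-back comparison is what makes it safe to wipe the card.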
Even though the weather pattern for the next few days was great for storms I had to break off the chase to go to Los Angeles for the Cinegear trade show. As I left Oklahoma City early on the 31st of May I was aware that there was a significant risk of tornadoes in Oklahoma that day. Oh well I thought, I’ve done well, got some great footage, time to move on.
Friday the 31st of May is a day that storm chasers around the world will remember for a long time. A supercell storm near the town of El Reno in Oklahoma exploded in size and ferocity. It produced the largest tornado ever recorded, 2.6 miles wide, a tornado that was erratic and violent, rated at EF5, the strongest tornado strength rating. The storm caught many chasers out, many literally driving for their lives, creating storm chaser traffic jams on narrow roads as they tried to escape the rapidly expanding storm. Some didn't make it. Several cars were swept up by the tornado and tragically 4 storm chasers were killed. 3 of them I knew, one was a friend. The 3 chasers I knew were researchers measuring the wind speeds around tornadoes in an attempt to better understand them. They were some of the most experienced storm chasers out there. I think every storm chaser knew that one day a chaser would get killed by a storm, but no one expected it to be these guys: highly experienced, professional researchers, not gung-ho adrenalin junkies just there for the thrill of getting as close as possible. I have to admit that this was a bit of a wake up call. I'm not a big risk taker and I do like to keep a little distance from the storms; I've always had the greatest of respect for their power. In the future I'll be avoiding chasing in areas where large numbers of chasers can lead to traffic jams and blocked roads, most notably around Oklahoma City in May.
Once back at home it was time to review the footage and put together a short demo clip. Using my off-the-shelf 15″ Retina MacBook Pro I cut together a short sequence using Premiere Pro with Sony's raw plugin. I edited directly off the single 3.5″ hard drive, no RAID or anything else. Once I was happy with the edit I exported an AAF file from Premiere which I then took into DaVinci Resolve. I used Resolve to grade and finish the footage, rendering it out overnight. It did take about 3 hours to render the finished 4K project, but I used a little noise reduction on many of the clips and this takes a lot of processing. Let's face it, a laptop isn't the best way to work with 4K material, but it can be done. I'm currently putting together a workstation specifically for Resolve that will have dual graphics cards to really boost the render performance. I have to say that I am delighted with the quality of the material. The detail in the corn fields is incredible, the lightning bolts are detailed and crisp, and there are no clipped highlights in any shot. Now all I need to do is go back through the entire 4 hours of footage that I have, clip it down into stock footage sized chunks and write all the keywords and metadata for the stock footage libraries and my clients.
PS: On my last day in LA I had an interesting discussion with a production company about a 4K, 3D storm shoot. Maybe I’ll be back chasing storms in July with a pair of F5’s!
Having just finished 3 workshops at Cinegear and a full day F5/F55 workshop at AbelCine one thing became apparent. There is a lot of confusion over raw and log recording. I overheard many people talking about shooting raw using S-log2 or people simply interchanging raw and log as though they are the same thing.
Raw and Log are completely different things!
Raw simply records the raw, unprocessed data coming off the video sensor. It's not even a colour picture as we know it and it does not have a white balance; it is just the digital ones and zeros coming straight from the sensor.
S-Log, S-Log2, LogC and C-Log are signals created by taking the sensor's output, processing it into an RGB or YCbCr signal and then applying a log gamma curve. This is much closer to conventional video, in fact it's actually very similar: like conventional video it has a white balance and is encoded into colour. S-Log etc. can be recorded using a compressed codec or uncompressed, but even when uncompressed it is still not raw.
So why the confusion?
Well, if you tried to view the raw signal from a camera shooting raw in the viewfinder it would look incredibly dark, with just a few small bright spots. This would be impossible to use for framing and exposure. To get around this, a raw camera will convert the raw sensor data to conventional video for monitoring. Many cameras, including the Sony F5 and F55, will convert the raw to S-Log2 for monitoring, as only S-Log2 can show the camera's full dynamic range. At the same time the F5/F55 can record this S-Log2 signal to the internal SxS cards. But the raw recorded on the AXS cards is still just raw, nothing else; the internal recordings are conventional video with S-Log2 gamma (or an alternate gamma if a look-up table has been used). The two are completely separate and different things and should not be confused.
UPDATE: Correction/Clarification. OK, there is room for more confusion, as I have been reminded that ArriRaw uses log encoding, as does RedCode. It is also likely that Sony's raw uses data reduction for the higher stops (as Sony's raw is ACES compliant it possibly uses data rounding for the higher stops). ArriRaw uses log encoding for the raw data to minimise data wastage, but the data is still unencoded sensor data: it has not been encoded into RGB or YCbCr, it does not have a white balance and it does not have gain applied; all of this is added in post. Sony's S-Log and S-Log2, Arri's LogC, Canon's C-Log and Cineon are all encoded and processed RGB or YCbCr video with a set white balance and a log gamma curve applied.
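To illustrate what a log gamma curve actually does, here is a toy Python sketch. The constants are purely illustrative, not Sony's actual S-Log2 maths: the point is simply that linear scene light spanning many stops is squeezed into a 0-1 signal, with each stop given a roughly equal share of the output range, which is why highlights survive and why the ungraded picture looks so flat.

```python
# Illustrative log curve (NOT real S-Log2): map linear light onto
# a 0-1 signal so that each stop gets an equal slice of the range.
# Raw skips this step entirely and stores unencoded sensor data.
import math

def log_encode(linear, stops=14):
    """Map linear light (1.0 = brightest) onto 0-1, log style."""
    floor = 2 ** -stops              # darkest representable value
    linear = max(linear, floor)
    return math.log2(linear / floor) / stops

for stop in range(0, 15, 2):
    lin = 2 ** -stop                 # each step is 2 stops darker
    print(f"{stop:2d} stops down -> signal {log_encode(lin):.2f}")
```

Run it and you can see that a value 7 stops down still lands at 0.5 on the signal, whereas with a linear mapping it would sit at less than 1% and be crushed into the blacks.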
Fantastic day today. Captured a large, strong tornado in 4K raw with my F5. Got some really great footage of the tornado to add to the previous days lightning and storm structure footage. Can’t wait to get into the edit suite to start grading and cutting this footage. Still have 2 more days of storm chasing to go!
Here’s a short sample of some lightning I shot today with my F5. Can’t wait until I get a bit more time to edit together the incredible footage I’ve obtained in 4K raw of severe thunderstorms.
The PMW-F5 and F55 are fantastic cameras. If you have the AXS-R5 raw recorder the dynamic range is amazing. In addition, because there is no gamma applied to the raw material, you can be very free with where you set middle grey. Really the key to getting good raw is simply not to over expose the highlights; provided nothing is clipped, it should grade well. One issue though is that there is no way to show 14 stops of dynamic range in a pleasing way with current display or viewfinder technologies, and at the moment the only exposure tool built in to the F5/F55 cameras is zebras.
My experience over many shoots with the camera is that if you set zebras to 100%, don't use a LUT (so you're monitoring using S-Log2) and expose so that you're just starting to see zebra 2 (100%) on your highlights, you will in most cases have 2 stops or more of overexposure headroom in the raw material. That's fine and quite useable, but shoot like this and the viewfinder images will look very flat and in most cases overexposed. The problem is that S-Log2's designed white point is only 59% and middle grey is 32%. If you're exposing so your highlights are at 100%, then white is likely to be much higher than the designed level, which also means middle grey and your entire mid range will be excessively high. This pushes those mids into the more compressed part of the curve, squashing them all together and making the scene look extremely flat. This also has an impact on the ability to focus correctly, as best focus is less obvious with a low contrast image. As a result of the over exposed look it's often tempting to stop down a little, but this then wastes a lot of available raw data.
So, what can you do? Well, you can add a LUT. The F5 and F55 have 3 LUTs available, based either on REC709 (P1) or Hypergamma (P2 and P3). These will add more contrast to the VF image, but they show considerably less dynamic range than S-Log2. My experience with using these LUTs is that on every shoot I have done so far, most of my raw material has typically had at least 3 stops of unused headroom. Now I could simply overexpose a little to make better use of that headroom, but I hate looking into the viewfinder and seeing an overexposed image.
Why is it so important to use that extra range? It’s important because if you record at a higher level the signal to noise ratio is better and after grading you will have less noise in the finished production.
Firmware release 1.13 added a new feature to the F5 and F55: EI gain. EI or Exposure Index gain allows you to change the ISO of the LUT output. It has NO effect on the raw recordings, it ONLY affects the look-up tables. So if you have the LUTs turned on, you can now reduce the gain on the viewfinder and HDSDI outputs as well as the SxS recordings (see this post for more on EI gain). By using EI gain and an ISO lower than the camera's native ISO I can reduce the brightness of the view in the viewfinder. In addition, the zebras measure the signal AFTER the application of the LUT and EI gain. So if you expose with a LUT and zebra 2 just showing on your highlights, then turn on the EI gain, set it to 800 on an F5 (native 2000 ISO) or 640 on an F55 (native 1250 ISO) and adjust your exposure so that zebra 2 is once again just showing, you will be opening your aperture by about 1.3 (F5) or 1 (F55) stops. As a result the raw recordings will be about 1.3/1 stops brighter.
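The arithmetic behind this is straightforward: the exposure shift is just the base-2 logarithm of the ratio between the camera's native ISO and the EI setting, since nothing is applied to the raw itself, the metering and monitoring path is simply made darker so you open up to compensate. A quick check:

```python
# Stop shift produced by an EI setting relative to native ISO.
import math

def ei_stop_shift(native_iso, ei_iso):
    """How many stops brighter the raw is exposed at this EI."""
    return math.log2(native_iso / ei_iso)

print(round(ei_stop_shift(2000, 800), 1))  # F5:  ~1.3 stops
print(round(ei_stop_shift(1250, 640), 1))  # F55: ~1.0 stops
```

The same formula tells you what any other EI setting will do, e.g. 640 EI on an F5 works out to a little over 1.6 stops.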
In order to establish for my own benefit which was the best EI gain setting to use I spent a morning trying different settings. What I wanted to find was a reliable way to expose at a good high level to minimise noise but still have a little headroom in reserve. I wanted to use a LUT so that I have a nice high contrast image to help with focus. I chose to concentrate on the P3 LUT as this uses hypergamma with a grey point at 40% so the mid range should not look underexposed and contrast would be quite normal looking.
When using EI ISO 800 and exposing the clouds in the scene so that zebras were just showing on the very brightest parts of the clouds the image below is what the scene looked like when viewed both in the viewfinder and when opened up in Resolve. Also below is the same frame from the raw footage both before and after grading. You can click on any of the images to see a larger view.
As you can see, using LUT P3 and 800 EI ISO (PMW-F5) with zebra 2 just showing on the brightest parts of the clouds, my raw footage is recorded at a level roughly 1.3 stops brighter than it would have been if I had not used EI gain. But even at this level there is no clipping anywhere in the scene, so I still have some extra headroom. So what happens if I expose one more stop brighter?
So, as you can see above, even with zebras over all of the brighter clouds and the exposure at +1 stop over where the zebras were just appearing on the brightest parts of the clouds, there was no clipping. I still had some headroom left, so I went 1 stop brighter again. The image in the viewfinder is now seriously over exposed.
The lower of the 3 images above is very telling. Now there is some clipping; you can see it on the waveform. It's only on the very brightest clouds, but I have now reached the limit of my exposure headroom.
Based on these tests I feel very comfortable exposing my F5 in raw by using LUT P3 with EI gain at 800 and having zebra 2 starting to appear on my highlights. That would result in about 1.5 stops of headroom. If you are shooting a flat scene you could even go to 640 ISO which would give you one safe stop over the first appearance of zebra 2. On the F55 this would equate to using EI 640 with LUT P3 and having a little over 1.5 stops of headroom over the onset of zebras or EI 400 giving about 1 stop of headroom.
My recommendation having carried out these tests would be to make use of the lower EI gain settings to brighten your recorded image. This will result in cleaner, lower noise footage and also allow you to “see” a little deeper into the shadows in the grade. How low you go will depend on how much headroom you want, but even if you use 640 on the F5 or 400 on the F55 you should still have enough headroom above the onset of zebra 2 to stay out of clipping.
Cinematographer and film maker Alister Chapman's Personal Website