Understanding Sony's S-Log3. It isn't really noisy.

It's been brought to my attention that there is a lot of concern about the apparent noise levels when using Sony's new S-Log3 gamma curve. The problem is that when you view ungraded S-Log3 it appears to have more noise in the shadows than S-Log2, and many are concerned that this "extra" noise will end up making the final pictures noisier. The reality is that this is not the case: you won't get any extra noise using S-Log3 over S-Log2, and S-Log3 is generally easier to grade and work with in post production.

So what’s going on?

S-Log3 mimics the Cineon log curve. As a result the shadow and low-key parts of the scene are shown and recorded at a brighter level than with S-Log2. Because the shadows are brighter, the noise in the shadows appears to be worse. It isn't. The noise level might be a bit higher, but the important thing, the ratio between wanted picture information and unwanted noise, is exactly the same whether in S-Log2 or S-Log3 (or in fact any other of the camera's gamma curves at the native ISO).
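If you want to see where the curve actually places things, here is a small Python sketch of the S-Log3 transfer function as given in Sony's technical summary. The constants are quoted from memory, so do verify them against the white paper, and the S-Log2 figures in the comment are approximate reference points rather than the full formula:

```python
import math

def slog3(x):
    """Sony S-Log3: scene reflectance x (0.18 = 18% grey) to a
    normalised 10-bit code value (0 to 1, full range)."""
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

print(f"black:     {slog3(0.0):.3f}")   # ~0.093 (CV95, about 3.5 IRE)
print(f"2% shadow: {slog3(0.02):.3f}")  # ~0.206
print(f"18% grey:  {slog3(0.18):.3f}")  # ~0.411
# S-Log2, going by Sony's published charts, puts 18% grey at roughly 32%
# and black at roughly 3 IRE, so the same shadow tones sit visibly lower.
```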

Let me explain:

The signal-to-noise ratio of a camera is determined almost entirely by the sensor, and the sensor does NOT change between gamma curves.

The other thing that affects the signal-to-noise ratio is the exposure level, or to be more precise the aperture and how much light falls on the sensor. This should be the same for S-Log2 and S-Log3, so again no change there.

As these two key factors do not change when you switch between S-Log2 and S-Log3, there is no change in the signal-to-noise ratio between the two. It is the ratio between wanted picture information and noise that is important; not the noise level, but the ratio. What people see when they look at ungraded S-Log3 is a higher noise level (because the signal levels are also higher), but the ratio between the wanted signal and the noise is the same for both S-Log2 and S-Log3, and it's that ratio that determines how noisy your pictures will be after grading.

Gamma is just gain, nothing more, nothing less, applied in varying amounts at different levels. In the case of log, the amount of gain decreases as you go further up the curve.

Increasing or decreasing gain does NOT significantly change the signal-to-noise ratio of a digital camera (or any other digital system). It might make noise more visible if you are amplifying the noise more, for example in an underexposure situation where you add gain to make a very dark object brighter. But the ratio between the dark object and the noise does not change: you have made the dark object brighter by adding gain, and you have made the noise brighter by exactly the same amount, so the noise simply becomes more obvious. The ratio between the wanted signal and the unwanted noise remains constant no matter what the gain; it is a ratio, and gain does not change ratios. With S-Log3 you will need less gain in the shadows in post production than you would with S-Log2, and this negates the extra gain that the camera adds to the shadows when shooting S-Log3.

Let's take a look at some math. I'll keep it very simple, I promise!

Just for a moment, to keep things simple, let's say some camera has a signal-to-noise ratio of 3:1 (SNR is normally measured in dB, but I'm going to keep things really simple here).

So, from the sensor if my picture signal is 3 then my noise will be 1, or if my picture signal is 6 then my noise will be 2.

If I apply Gamma Curve “A” which has 2x gain then my picture becomes (6×2) 12 and my noise (2×2) 4. The SNR is 12:4 = 3:1

If I apply Gamma Curve "B" which has 3x gain then my picture becomes (6×3) 18 and my noise becomes (2×3) 6. The SNR is 18:6 = 3:1, so no change to the ratio, but the noise is 6 compared to the 4 of Gamma "A"; as a result Gamma "B" will appear to be noisier when viewed on a monitor.

Now let's take those imaginary clips into post production:

In post we want to grade the shots so that we end up with the same brightness of image, so let's say our target level after grading is 15.

For the gamma “A” signal we need to add 1.25x gain to take 12 to 15. As a result the noise now becomes (1.25 x 4) 5.

For the gamma "B" signal (our noisy looking one) we need to use 0.8333x gain to take 18 to 15. As a result the noise now becomes (0.8333 × 6) 5.

Notice anything? In both cases the noise in the final image is exactly the same.
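If you want to try other numbers, here is the same arithmetic as a couple of lines of Python. The function is just the example above generalised, nothing camera specific:

```python
def noise_after_grade(signal, noise, camera_gain, target_level):
    """Apply a camera gamma gain, then the post production gain needed
    to hit the target level; return the final signal and noise."""
    recorded_signal = signal * camera_gain
    recorded_noise = noise * camera_gain
    post_gain = target_level / recorded_signal
    return recorded_signal * post_gain, recorded_noise * post_gain

# Sensor output: signal 6, noise 2 (a 3:1 ratio), target level 15.
print(noise_after_grade(6, 2, 2.0, 15))  # gamma "A": (15.0, 5.0)
print(noise_after_grade(6, 2, 3.0, 15))  # gamma "B": (15.0, 5.0)
# Both end with noise of 5: the camera gain cancels out entirely.
```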

OK, so that’s the theory, what about in practice?

Take a look at the images below. These are 400% crops from larger frames, with identical exposure, workflow and processing for each. You will see the original S-Log2 and S-Log3, plus the S-Log2 and S-Log3 after applying the LC-709 LUT to each in Sony's Raw Viewer. Nothing else has been done to the clips. You can "see" more noise in the raised shadows in the untouched S-Log3, but after applying the LUTs the noise levels are the same. This is because the signal-to-noise ratio of both curves is the same, and after adding the LUTs the total gain applied (camera gain + LUT gain) to get the same output levels is the same.

[Images: S-Log2 400% crop, S-Log3 400% crop, S-Log2 after LC-709 LUT, S-Log3 after LC-709 LUT]

It's interesting to note in these frame grabs that you can actually see the improvement in shadow detail that S-Log3 brings. The bobbles and the edge of the picture frame look better in the S-Log3 in my opinion. A little bit more shadow data has given a more pleasing result with fewer artefacts.

The only way you can alter the SNR of the system (other than through electronic noise reduction) is by changing the exposure, which is why EI is so important and so effective.

Noise is always most problematic in the shadows and low key. As we are putting more data into the shadows with S-Log3, we are in effect recording the noise in the shadows more precisely; this won't enhance or increase it. All that will happen is that it is more accurately reproduced with fewer artefacts, which is a good thing.

In addition, S-Log3 has a nearly straight-line curve. This means that in post production it's easier to grade, as adjustments to one part of the image will have a similar effect on other parts of the image. It's also very, very close to Cineon and to Arri Log C, and in many cases LUTs and grades designed for these gammas will also work pretty well with S-Log3.

The downside to S-Log3?

Very few really. Fewer data points are recorded for each stop in the brighter parts of the picture and highlights compared to S-Log2. As a result S-Log3 is slightly less forgiving of overexposure than S-Log2, so you probably don't want to push your EI quite as hard. 1.5 stops over should be OK (so using an EI 1.5 stops down from native), but 2 or more will hurt your pictures.

Want to pick my brain for 10 days, fancy an adventure and a chance to see and shoot some very cool sights? Why not join me for a storm chasing adventure?

New Extended Version Supercell Video.

Storm chasing season is on the way and I will be off to the USA to shoot landscapes, storms and maybe tornadoes in May. If you fancy a bit of an adventure and want to shoot stuff like this, why not join me? See this link for more info. In the meantime, why not take a look at this extended and re-graded version of the Supercell storm video I shot last May? It's on YouTube in 4K if you select "2160" as the image size. Just wish YouTube wouldn't compress stuff so much.

Are You Screwing Up Your Footage In Resolve?

First of all let me say that DaVinci Resolve is a great piece of software. Very capable, very powerful and great quality. BUT there is a hidden “Gotcha” that not many are aware of and even more are totally confused by (including me for a time).

This has taken me days of research, fiddling, googling and messing around to finally be sure of exactly what is going on inside Resolve. I am NOT a Resolve expert, so if anyone thinks I have this wrong do please let me know, but here goes…

These are the important things to understand about Resolve.

Internally Resolve Always Works With Data Levels (CV0 to CV1023, where CV stands for Code Value).

Resolve’s Scopes Always Measure The Internal Data Levels – These are NOT necessarily the Output Levels.

There Are 3 Data Ranges Used For Video: Data, CV0 to CV1023; Legal Video, 0-100IRE = CV64 to CV940; and Extended Range Video, 0-109IRE = CV64 to CV1023 (CV1019 over HDSDI).

Most Modern Video Cameras Record Using Extended Range Video, 0-109IRE or CV64 to CV1019.

Resolve Only Has Input Options For Data Levels or Legal Range Video. There is no option for Extended Range video.

If Transcoding Footage You May Require Extended Range Video Export. For example converting SLog video or footage from the majority of modern cameras which record up to 109IRE.

Resolve Only Has Output Options For Data Levels or Legal Range Video. There is no simple option to output, monitor or export using just the 64 to 1019 range as a 64 to 1019 range.

So, clearly anyone wanting to work with Extended Range Video has a problem. Not so much for grading perhaps, but a big issue if you want to transcode anything. Do remember that almost every modern video camera makes use of the full extended video range. It’s actually quite rare to find a modern camera that does not go above 100IRE.

So why not just use data levels for everything? Well, that is an option. You can set your clip attributes (in the media pane) to Data Levels, set your monitor output to Data Levels and, when you render, choose Data Levels. In fact this is what YOU MUST DO if you want to convert files from one format to another without any scaling or level shifts. But be warned: never, ever grade like this unless you add a Soft Clip LUT (more on that in a bit), as you will end up with illegal super blacks, blacks that are blacker than black and will not display correctly on most devices.

There are probably an awful lot of people out there using Resolve to convert XAVC or other formats to ProRes and in the process unwittingly making a mess of their footage, especially S-Log2 and hypergammas.

On input you can choose clip attributes of Data 0-1023 or Video 64-940, as well as Auto (in most cases, if Resolve detects luma levels under 64 the footage is treated as Data, otherwise as video levels). Anything set to video levels, or detected as video levels, gets scaled from the source's CV64-940 range to Resolve's internal CV0-1023 range.

As Resolve's waveform/vectorscopes etc. always measure the internal scaled range, there is no way to tell just by looking at the scopes what range your original material was in or whether it's been scaled. If you do want to check the range of a source clip, try reducing the video level in the colour panel. If your clip is extended range then you should be able to see the previously hidden high range by pulling the levels down. A legal range clip on the other hand will have nothing above Resolve's 1023, so the peak level will just drop.

On output you can choose Data 0-1023 or Legal Video 64-960 for your output or monitoring range (Resolve uses 960, which is the CbCr maximum value; Y is 940). For the majority of modern cameras and the many modern workflows where outputting 64-1023 may be required, there is no option! So if you are working with video levels, anything you want to work with using extended range ends up either scaled on input or clipped/range restricted, blacks crushed, on output.

For example:

Import a hypergamma or S-Log clip, which is 64-1023, don't touch or grade the footage, then export using video levels, and the range is clipped: the export will no longer have the highlights recorded above 100IRE in the original. The original input files will be CV64-1023 but the video range output files will be CV64-940; the range is clipped off at 940 (100IRE). If you set the clip attributes to "video 64-940" then on input CV940 is mapped to CV1023 in Resolve, so anything you shot between 100 and 109IRE (940-1019) goes out of range and is not seen on the output (it's still there inside Resolve, but you can't get to it unless you grade the footage). There just isn't a correct option to pass Full Range video through 1:1, unless you use data in, data out, but then you run the risk of having illegal super blacks. If you leave the clip attributes as video and then export using Data Levels, your original CV64 black gets pulled down to CV0, so your blacks are crushed; however you do then retain the stuff above 100IRE.
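To put numbers on this, here is a small Python sketch of the scalings as I understand them. The formulas are the standard legal-to-full conversions; treat this as my reading of Resolve's behaviour rather than anything official:

```python
def video_levels_in(cv):
    """Clip attribute 'Video': source CV64-940 is scaled to
    Resolve's internal CV0-1023."""
    return (cv - 64) * 1023.0 / (940 - 64)

def data_levels_out(cv_internal):
    """Data levels render: internal values pass through 1:1."""
    return cv_internal

def video_levels_out(cv_internal):
    """Legal video render: internal CV0-1023 scaled back to CV64-940."""
    return cv_internal * (940 - 64) / 1023.0 + 64

# A 109IRE super-white highlight recorded at CV1019, brought in as video:
print(round(video_levels_in(1019)))   # ~1115, beyond the internal 1023,
                                      # out of reach until you grade it down

# Untouched legal-range material, video levels in but data levels out:
print(round(data_levels_out(video_levels_in(64))))   # 0: black crushed to CV0
print(round(data_levels_out(video_levels_in(940))))  # 1023: white pushed up
```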

If you're using Resolve to convert XAVC S-Log2 or S-Log3 to something else, ProRes perhaps, this means that any Look Up Tables used in the downstream application will not behave as expected, because your output clip will have the wrong levels. So for file conversions you MUST use data levels on the input clip attributes and data levels on output to pass the video through as per the original, even though you are working with footage that complies with perfectly correct Extended Range video standards. But you must never edit or grade like this as you will get super blacks on your output… unless you generate a Soft Clip LUT.

If you import a full range video clip that goes from CV64 to CV1019(1023) (0 to 109IRE) and do nothing to it, then it will come out of Resolve as either data levels CV0 to CV1023 (-7IRE to 109IRE) or legal video CV64 to CV940 (0 to 100IRE), neither of which is ideal when transcoding footage.

So what can you do if you really need an Extended Range workflow? Well, you can generate a Soft Clip LUT in Resolve to manage your output range. For this to work correctly you need to work entirely with data levels: clip attributes must be set to Data Levels, monitor out to Data Levels, and exports should be at Data Levels. The LUT is NOT necessary for direct 1:1 transcoding, as there the assumption is that you want a direct 1:1 copy of the original data, just in a different format.

You use Resolve's Soft Clip LUT generator (on the Look Up Tables settings page) to create a 1D LUT with a Black Clip of 64 and a White Clip of 1019. This LUT is then applied as a 1D Output LUT. If you are using an existing output LUT (1D or 3D) then you can use the Soft Clip LUT generator to make a modified version of that existing LUT, adding the 64 and 1019 clip levels.
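Resolve's generator does all this for you, but for the curious, here is a minimal Python sketch that writes an equivalent 1D LUT in the common .cube text layout. It's a hard clip rather than Resolve's softened knee, and the filename is just an example:

```python
def write_clip_cube(path, black_clip=64, white_clip=1019, size=1024):
    """Write a 1D .cube LUT that clamps 10-bit code values below
    black_clip and above white_clip, shifting nothing else."""
    lo, hi = black_clip / 1023.0, white_clip / 1023.0
    with open(path, "w") as f:
        f.write("LUT_1D_SIZE {}\n".format(size))
        for i in range(size):
            x = i / (size - 1)
            y = min(max(x, lo), hi)  # clamp into the 64-1019 range
            f.write("{0:.6f} {0:.6f} {0:.6f}\n".format(y))

write_clip_cube("clip64_1019.cube")  # example output filename
```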

So what is it doing?

As you are working at Data Levels, your clips and footage will come into Resolve 1:1. So a clip with a range of CV0-1023 will come in as CV0-1023, a CV64-940 clip will come in as CV64-940, and a CV64-1019 clip as CV64-1019. Most video clips from a modern camera will use CV64-1019. A clip using CV64-1019 will be imported and handled as CV64-1019 within the full 0-1023 range, but the levels are not shifted or altered, so if it's CV220 in the original it will be CV220 inside Resolve. One immediate benefit is that Resolve's scopes are now showing the actual original levels of the source clip, as shot. Phew, that's a lot of CVs in that paragraph; hope you're following along OK.

You grade your footage as normal. The Soft Clip LUT will clip anything below CV64 (0 IRE, video black) but allow the full extended video range up to CV1019(1023) to be used. It won't shift the levels, just not allow anything to go below CV64. If grading for output, do ensure that you really do want extended range (if you want to stay broadcast safe, use video range).

The output to your HDSDI monitor will be unscaled data, CV0-1019, but because of the LUT clipping at 64 there will be nothing below 64: no super blacks. This is how it should be; this is correct and what you want for an extended range workflow, perhaps for passing your footage on to another video editing application for finishing, or where it will be mixed with other full range footage. The majority of grading workflows, however, will probably be conventional Legal Video Range.

When you render a file using data levels, the file can go from CV0-1019, but again because of the Soft Clip LUT there will be nothing below 64 (black). You can however use the full range above CV940, so super whites etc. will be passed through correctly to the rendered file. This way you can make use of the complete extended video range.

In Summary:

If you want to use Resolve to convert files from one codec to another without changing your levels, you must ensure the Clip Attributes are set to Data, your monitor out must be set to Data Levels, and you must render using Data Levels. If you don't, there is a very high likelihood that your levels will be incorrect or altered, almost certainly different to what you shot.

If you wish to grade and output anything above 100IRE (perhaps when mixing graded footage with full range camera footage) then again you must use data levels throughout the workflow, but you should add a Soft Clip LUT with CV1019 as the upper clip and CV64 as the lower clip to prevent illegal black levels while retaining the full video range to 109IRE.

It would be so much simpler if Resolve had an extended range video out option.

What causes CA or Purple and Blue fringes in my videos?

Blue and purple fringes around edges in photos and videos are nothing new. It's a problem we have always had; telescopes and binoculars can also suffer. It's normally called chromatic aberration or CA. When we were all shooting in standard definition it wasn't something that created too many issues, but with HD and 4K cameras it's a much bigger issue, because as you increase the resolution of the system (camera + lens), generally speaking, CA becomes much worse.

As light passes through a glass lens, the different wavelengths that give us the different colours we see are refracted, bent, by different amounts. So the point behind the lens where the light comes into sharp focus will be different for red light than for blue light.

A simple glass lens will bend red, green and blue wavelengths by different amounts, so the focus point will be slightly different for each.

The larger the pixels on your sensor, the less of an issue this will be. Let's say for example that on an SD sensor with big pixels, when the blue light is brought to best focus the red light is out of focus by half a pixel width. All you will see is the very slightest red tint to edges as a small bit of out-of-focus red spills onto the adjacent pixel. Now consider what happens if you increase the resolution of the sensor. If you go from SD to HD the pixels need to be made much smaller to fit them all onto the same size sensor; HD pixels are around half the size of SD pixels (for the same size sensor). So now that out-of-focus red light that was only half the width of an SD pixel will completely fill the adjacent pixels, and the CA becomes more noticeable.
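Here's a quick back-of-an-envelope calculation of the effect. The sensor width and defocus blur are invented round numbers purely for illustration, not real camera specs:

```python
sensor_width_mm = 9.6   # illustrative sensor width
blur_um = 6.0           # assume red is defocused by a 6 micron blur

for label, h_pixels in [("SD", 720), ("HD", 1920), ("4K", 3840)]:
    pitch_um = sensor_width_mm * 1000 / h_pixels
    print(f"{label}: pixel pitch {pitch_um:.1f} um, "
          f"fringe spans {blur_um / pitch_um:.1f} pixels")
# SD: pixel pitch 13.3 um, fringe spans 0.5 pixels
# HD: pixel pitch  5.0 um, fringe spans 1.2 pixels
# 4K: pixel pitch  2.5 um, fringe spans 2.4 pixels
```

The same optical blur stays fixed in microns, but as the pixels shrink it covers more and more of them, so the fringe becomes easier to see.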

In addition, as you increase the resolution of the lens you need to make the focus of the light "tighter" and less blurred to increase the lens's resolving power. This has the effect of making the difference between the focus points of the red and blue light more distinct: there is less blurring of each colour, so less bleed of one colour into the other. When each focus point is more distinct, the difference between the in-focus and out-of-focus light becomes more obvious, and so does the colour fringing.

This is why SD lenses very often show less CA than HD lenses: a softer, more blurry SD lens will have less distinct CA. Lens manufacturers use exotic types of glass to try to combat CA. Some types of glass have a negative index, so blue may focus closer than red, and other types have a positive index, so red may focus closer than blue. By mixing positive and negative glass elements within the lens you can cancel out some of the colour shift. But this is very difficult to get right across all focal lengths in zoom lenses, so some CA almost always remains. The exotic glass used in some of the lens elements can be incredibly expensive to produce and is one of the reasons why good lenses don't come cheap.

Rather than trying to eliminate every last bit of CA optically, the other approach is to reduce the CA electronically, by either shifting the R, G and B channels in the camera or reducing the saturation around high-contrast edges. This is what ALAC or CAC does. It's easier to get a good result from these systems when the lens is precisely matched to the camera, and I think this is why the CA correction with the Sony kit lenses tends to be more effective than with 3rd party lenses.
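To illustrate the second approach, here is a toy Python/numpy sketch that desaturates high-contrast edges. It's a crude stand-in for the idea behind ALAC/CAC, not Sony's actual algorithm, and the threshold and amount values are arbitrary:

```python
import numpy as np

def desaturate_fringes(rgb, threshold=0.25, amount=0.8):
    """Toy CA reduction: find high-contrast edges in luma and pull
    saturation down around them. rgb is a float HxWx3 array, 0-1."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    gy, gx = np.gradient(luma)
    edges = np.hypot(gx, gy) > threshold           # high-contrast edge mask
    grey = np.repeat(luma[..., None], 3, axis=2)   # fully desaturated image
    out = rgb.copy()
    out[edges] = rgb[edges] * (1 - amount) + grey[edges] * amount
    return out
```

A real in-camera system also works per colour channel with sub-pixel shifts, but the principle, attacking the coloured fringe only where the contrast is high, is the same.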

Sony recently released firmware updates for the PMW-200 and PMW-300 cameras that improve the performance of the electronic CA reduction when using the supplied kit lenses.

Sony PMW-F5 and F55 to get ProRes and DNxHD codecs.

In a very welcome announcement today, Sony stated that, as part of their ongoing commitment to making the F5 and F55 cameras as versatile and flexible as possible, and following customer feedback, there will be a hardware upgrade option for both cameras that will allow you to add the popular ProRes and DNxHD codecs to the internal recording options.

This is great news, as it will really help those still using FCP7 and provides the cameras with a codec for just about every possible scenario, although I have to say I really like the XAVC codec for acquisition.

No details of when, or how much, but great news. You can find a few more details here: http://community.sony.com/t5/F5-F55/Announcing-ProRes-and-DNxHD-support-for-both-F5-and-F55/m-p/293988#U293988

Premiere Pro CC now supports Sony Raw – WITHOUT the Sony Plug-In.

I was having "Media Pending" issues with Sony Raw footage in Premiere on my Mac. I did some digging and it appears that in the last update to Premiere Pro CC (version 7.2) Adobe included native support for Sony Raw at 4K, 2K and HFR. If you are running Premiere CC and still have the Sony Raw plug-in installed, it makes Premiere unresponsive and will result in a lot of "Media Pending" messages when you try to work with Raw footage. After removing the Sony Raw importer plug-in all my media pending issues went away and I can now use 2K HFR footage in Premiere CC.

To uninstall the plug-in on a Mac (it's called "ImporterSonyRawBundle"), go to "Applications", then "Adobe Premiere Pro CC", right click on the "Adobe Premiere Pro CC" app file and select "Show Package Contents", then open "Contents" and then "Plug-ins"; there you should find the file you need to trash.
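If you prefer to script it, here is a small Python sketch that follows the same steps. The path is assembled from the folder names above and should be checked against your own install before running:

```python
import os
import shutil

# Path built from the steps above; verify it exists on your system first.
plugin = ("/Applications/Adobe Premiere Pro CC/Adobe Premiere Pro CC.app/"
          "Contents/Plug-ins/ImporterSonyRawBundle")

if os.path.exists(plugin):
    # Move it to the Trash rather than deleting, so it can be restored.
    shutil.move(plugin, os.path.expanduser("~/.Trash/"))
    print("Plug-in moved to Trash; restart Premiere.")
else:
    print("Plug-in not found at", plugin)
```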

Storm Chasing Tour and Workshop, 2014.

Me shooting a tornado with the PMW-F5 and AXS-R5 on my Miller Solo tripod.

So there's just a little over 3 months to go until my annual storm chasing expedition to the USA. Last year's trip was an amazing success, with some incredible storms and tornadoes captured on video and stills. This year's trip is 11 days long to maximise the chances of seeing jaw-dropping weather.

Please understand that I am not going to be trying to get as close as possible to a tornado or put my life or anyone else's life in danger. This trip is about capturing dramatic and beautiful images of the Great Plains of the USA, along with the severe storms that are common in spring. If you have seen any of my storm videos you will know that the structure of some of the storms that I expect to witness is incredible, the lightning shows are breathtaking and the scenery impressive. If you are a scenic or landscape photographer or videographer then this is a trip for you.

Dramatic Supercell Thunderstorm

During the trip I will be on hand to share my knowledge and help you improve your photo and video skills. We will have a motion control rig for time-lapse, 4K cameras and all kinds of other cool gadgets to play with.

More details of the trip can be found here: http://www.xdcam-user.com/tornado-chasing/. Please note that spaces are extremely limited, so it's first come, first served.

If you really want to make the most of your trip then why not join me a few days early in Austin, Texas, where I will be running a music video production workshop at Omega Broadcast on the 22/23 of May; more details of this to follow soon. After the storm chasing tour it's Cinegear in LA on the 6/7th of June.

The Cape Peninsula shot on an F55 with two lenses.

Here is a short selection of clips that I shot around the Cape Peninsula while waiting for my flight home after some workshops. Shot on an F55 with 20mm and 85mm Sony PL lenses. Most of it is 2K raw at 240fps, but there are some normal speed shots in there too, as well as some S&Q motion time-lapse. A big thank you to Charles Maxwell for taking the time out to drive me around the Peninsula.

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It's designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera and get the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of any necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUTs to convert your footage to the right output standard, and where you place these LUTs in your workflow can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don't need to worry about making sure you are grading "under the LUT" etc.

ACES works on footage in scene-referred linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony's Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, then there is still an IDT to go from Sony's variation of scene-referred linear to the ACES variation, but this is a far simpler conversion with fewer losses and less image degradation as a result.

The IDT is a type of LUT that converts from the camera's own recording space to ACES linear space. The camera manufacturer has to provide detailed information about the way the camera records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the manufacturer's colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar, and the same grades and effects can be applied to any camera or gamma with the same end result. However, variations between colour filters, dynamic range etc. will mean that there are still individual characteristics to each camera, but any such variation is minimised by using ACES.

"Scene referred" means linear light, as per the actual light coming from the scene. No gamma, no colour shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we make them as close as possible, as the pictures should now be a true-to-life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene-referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are "display referred": the recordings or output are tailored through the use of gamma curves and looks etc. so that they look nice on a monitor that complies with a particular standard, for example Rec.709. To some degree a display-referred camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These "enhancements" to the image can sometimes make grading harder, as you may need to remove or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony's S-Log2 and S-Log3 will behave almost exactly the same. There will still be differences in the data spread, due to the different curves used in the camera and differences in the recording gamut etc., but despite this the same grade or corrections can be used on any type of gamma/gamut with very, very similar end results. (According to Sony's white paper, S-Gamut3 should work better in ACES than S-Gamut. In general though, the same grades should work more or less the same whether the original is S-Log2 or S-Log3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc. are much easier to do. You can still use LUTs to apply a common "look" to a project, but you don't need a LUT within ACES for the grade, as ACES takes care of the output transformation from the linear, scene-referred grading domain to your chosen display-referred output domain. The output process is a two-stage conversion: first from ACES linear through the RRT or Reference Rendering Transform. This is a computationally complex transformation that goes from linear to a "film like" intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed and well defined standard, and all the complicated maths is done getting to the RRT. After the RRT you then add a LUT called the ODT or Output Device Transform to convert to your final chosen output type: Rec.709 for TV, DCI-XYZ for cinema DCP etc. This means you do just one grading pass and then simply select the type of output you need for each type of master.
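As a rough picture of the chain for a single pixel value, here is a toy Python sketch. Every function body is an invented stand-in (the real RRT in particular is far more complex, and the RRT and ODT are combined here, as they often are in practice); the point is only the order of operations:

```python
def idt(code_value):
    """Toy IDT: decode a made-up camera log curve to scene linear,
    with 0.41 standing in for middle grey's code value."""
    return 0.18 * 2.0 ** ((code_value - 0.41) * 12.0)

def grade(linear):
    """The creative grade, done in linear light: here, +0.5 stop."""
    return linear * 2.0 ** 0.5

def rrt_and_odt(linear):
    """Toy stand-in for the RRT plus a Rec.709 ODT: clamp and apply
    a 2.4 display gamma. Swap the ODT for DCI-XYZ etc. as needed."""
    return min(max(linear, 0.0), 1.0) ** (1.0 / 2.4)

# One grading pass; only the output transform changes per deliverable.
print(rrt_and_odt(grade(idt(0.41))))  # ~0.57
```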

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.

This all sounds very complicated, and to a degree what's going on under the hood of your software is quite sophisticated. But for the colourist it's often just as simple as choosing ACES as your grading mode and then selecting your desired output standard, Rec.709, DCI-P3 etc. The software then applies all the necessary LUTs and transforms in all the right places so you don't need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT; you don't need different LUTs or looks for different cameras. I recommend that you give ACES a try.

TV Logic VFM-058W 1920 x 1080 compact monitor.

TV-Logic VFM-058W monitor on set.

I'm a big fan of TV Logic's monitors. I've been using a TV Logic 056W as my primary on-camera monitor for some time and it's been a good, solid and reliable workhorse. One of the great things about it is its weight: it's extremely light, which makes it very simple to mount on the camera.

TV Logic have now released a new monitor, very similar to the 056W. The new monitor is just a shade smaller at 5.5″ but now features a full 8-bit 1920 x 1080 resolution panel. At first I was sceptical that I would see any benefit from such high resolution on such a small screen, but I have been pleasantly surprised: I can see noticeably more detail in my images thanks to the screen's higher resolution. This makes focus checking easier, especially when shooting in 4K.

TV Logic's VFM-058W showing waveform display.

The 058W's feature set is very similar to the 056W's. Inputs are 3G HDSDI and HDMI. There is also an HDMI out, very handy for feeding another monitor if you only have a single HDMI or SDI out on the camera. There are the usual waveform and vectorscope displays for exposure control, alongside luma zone (a kind of false colour) and over-range error checking. There is coloured peaking (focus assist) to help with critical focus, as well as an always handy zoom mode that allows you to zoom into the picture for focus checking. There are various underscan/overscan viewing modes along with all the usual aspect ratio markers and safe area overlays.

DSLR shooters are also taken care of, thanks to the monitor's ability to take the output from a DSLR and scale the image so that it fills the screen.

One feature I really like is the way the monitor's 3 assignable function buttons work. Instead of having to go into the menus to assign your desired functions to the buttons, all you have to do is press and hold the button. After a couple of seconds a drop-down menu appears with all the available options; you simply scroll to the function you want with the scroll wheel and press the wheel to select. It's fast and simple to change the assigned function as your needs change.

As well as providing a sharp and clear image, the monitor's colours are nice and accurate. You can even use TV Logic's calibration utility and a measuring probe to fully calibrate the display and save the calibration settings as a LUT in the monitor. I'd really like to investigate the monitor's LUT capabilities, as this could prove very useful when shooting with log.

There is a built-in speaker for audio monitoring as well as a 3.5mm headphone jack. If you want, you can monitor your audio levels via on-screen audio meters.

The 058W is not quite as light as the 056W as it has a tough-looking magnesium alloy housing, but it's still nice and lightweight. It also lacks the analog input that the 056W has, although to be honest I've never used that feature on my 056W. The higher resolution screen is very nice, the new button layout (all along the top of the monitor) is a big improvement over the 056W, and overall this monitor feels a little more robust (although my 056W has been all over the world without anything breaking). With 1/4″ threads on all 4 sides, mounting is easy, so the VFM-058W will now replace my 056W as my on-camera monitor.

Well done TV-Logic. Another really good monitor.
