These are the events and workshops I will be running or attending in the next couple of weeks. Note that these are free to attend; please follow the links for more details:
20th March, H.Prestons Open House in Malvern. I’ll have a full F5 kit and an FS700 kit, plus a large number of options for everyone to take a look at, and there will be free advice and tutorials on any Sony cameras.
26th March, Suomilammi, Helsinki. PMW-F5 and PMW-F55 workshop looking at the cameras, workflow, 4K and raw, with hands-on time and end-to-end workflow demos.
8th – 11th April, NAB, Las Vegas. Daily one-hour overview of the F5/F55 looking at 4K, colour space, raw and workflow.
I’m also looking into digital imaging workshops in New York, Austin (Texas) and LA in late May. These will cover things like 4K, gamma, log, raw and camera setups, as well as general filming and production techniques. If you’re interested, let me know.
OK, OK, many of you will know this already, but for those who don’t understand what raw is all about, I’m going to try to explain.
First, let’s consider how conventional video is recorded. When TV was first invented back in the late 1930s, a way was needed to squeeze a signal with a large dynamic range into a sensibly sized signal. One important thing to consider and remember (if this article is going to make any sense) is that each additional stop of exposure is double the brightness of the previous stop. This doubling of brightness translates into a doubling of the bandwidth or data required to transmit or store it. With a limited-bandwidth system like TV broadcasting, if nothing were done to reduce the bandwidth required by ever brighter stops, you would only be able to broadcast a very narrow brightness or dynamic range.
Our own visual system is tuned to pay most attention to shadows and mid tones. After all, if anything was going to eat our ancient ancestors it was most likely going to come out of the shadows. In addition the things most important to us tend to be faces, plants and other things that are visually in the mid range. As a result we tend not to notice highlights and brighter parts of the world. So, if you take a picture or a video and reduce the amount of bandwidth or data used for the highlights we don’t tend to notice it in the same way that we would notice a reduction of data in the mid range. In order to keep video transmission and storage bandwidths under control something called a gamma curve is applied to recordings and broadcasts. This gamma curve gradually reduces the amount of bandwidth/data used as the brightness of the image increases. Gamma is a form of video compression and as with almost all types of video compression you are throwing away picture information. For the darker parts of the picture there is almost no compression, while the brighter parts, especially the highlights are highly compressed. For more info on Gamma take a look at Wikipedia.
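To put rough numbers on this, here is a sketch assuming a plain power-law gamma curve with an exponent of 0.45 (similar in spirit to, though not exactly, a broadcast gamma curve — the exponent is an assumption for illustration). It counts how many 8-bit code values each stop below peak white occupies with linear coding versus gamma coding:

```python
# Sketch: gamma as highlight compression. Assumes a simple power-law
# curve with exponent 0.45 -- an illustrative stand-in, not any
# particular camera's actual gamma curve.

def linear_cv(x, bits=8):
    """Encode a 0.0-1.0 linear light level as an integer code value."""
    return round(x * (2 ** bits - 1))

def gamma_cv(x, gamma=0.45, bits=8):
    """Encode the same level through a power-law gamma curve first."""
    return round((x ** gamma) * (2 ** bits - 1))

# Stop 0 spans linear 0.5..1.0, stop -1 spans 0.25..0.5, and so on.
for n in range(6):
    hi, lo = 1.0 / 2 ** n, 1.0 / 2 ** (n + 1)
    print(f"stop {-n:2d}: linear coding {linear_cv(hi) - linear_cv(lo):3d} "
          f"values, gamma coding {gamma_cv(hi) - gamma_cv(lo):3d} values")
```

With linear coding, half of all code values are spent on the single brightest stop; the gamma curve hands most of those values back to the mid tones and shadows, which is exactly the trade-off described above.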
So that’s gamma, and gamma is used by all conventional video cameras. The problem with gamma is that if you overexpose an image, let’s say a face, you push that face up into the more compressed part of the exposure range and this starts to get quite noticeable. Even though in post production you can reduce the brightness of the overexposed face, it will still often not look right because of the extra compression imparted on the face and the subsequent loss of data due to the overexposure.
To make matters worse, when you’re working with conventional video you have a very limited amount of bandwidth (think of it as a fixed-size bucket) within which you must store all your picture information. Try to put too much information into that bucket and it will overflow. As the dynamic range of modern cameras increases, we end up trying to squeeze ever greater amounts of picture information into that same sized bucket. The only way we can fit more stops into our fixed-size bucket is by compressing the highlights even more. This means that the recording system becomes even less forgiving of overexposure. It’s a bit of a catch-22: a camera with a greater dynamic range will often be less tolerant of incorrect exposure than a camera with a smaller dynamic range (and thus less highlight compression).
But what if we could do away with gamma curves altogether? Well, if we could do away with gamma curves then our exposure would be less critical. We could overexpose a face and, provided it wasn’t actually clipped (exceeding peak white), it could be corrected down to the right brightness in post production and it would look just fine. This would be fantastic, but the amount of data you would need to record without gamma would be massive.
Enter the Bayer pattern sensor! Raw can work with any type of sensor, but it’s Bayer-type sensors that we normally associate with raw. A Bayer sensor is a single sensor with a special array of coloured filters above the pixels that allows it to reproduce a colour image. It’s important to remember that the pixels themselves are just light-sensitive devices. They do not care what colour light falls on them; they simply output a brightness value depending on how much light falls on them. If we take a pixel with a green filter above it, only green light will fall on the pixel and the pixel will output a brightness value. But the signal is still just a brightness value, it is not a colour signal.

It does not become a colour signal until the output from the sensor is “De-Bayered”. De-Bayering is the process of taking all those brightness values from the pixels and converting them into a colour video signal. So again taking green as an example, we read out the first pixel (top left) and, as this was behind a green filter, we know that it was seeing green light. The next pixel was under a blue filter, but we still need a green value for our final picture, so we use the green pixels adjacent to the blue one to calculate an estimated green value for that location. This process is repeated for all three primary colours for every pixel location on the sensor.

This gives us a nice colour image, but also creates a lot of data. If we started off with 4096×2160 pixels (a 4K sensor) we would initially have 8.8 million data samples to record or store. However, when we convert this brightness-only information to RGB colour we get 4096×2160 of green, 4096×2160 of blue and 4096×2160 of red: a whopping 26.5 million data samples. A traditional video camera does all this De-Bayering prior to recording, but what if we skipped this process and just recorded the original sensor brightness samples? We could save ourselves a huge amount of data.
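The data-count arithmetic can be checked in a couple of lines:

```python
# Samples recorded off a 4K Bayer sensor versus the same frame after
# De-Bayering to full RGB.

width, height = 4096, 2160

bayer_samples = width * height      # one brightness value per photosite
rgb_samples = width * height * 3    # full red, green and blue planes

print(f"Bayer: {bayer_samples / 1e6:.1f} million samples")
print(f"RGB:   {rgb_samples / 1e6:.1f} million samples")
```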
The other thing that normally happens when we do the De-Bayering is that we make adjustments to the De-Bayered signal levels to allow for things like white balance and camera gain. Adjusting the gain or white balance of a camera does not change the way the sensor works: the same amount of light falls on the same pixels and the output of the sensor does not change. What we change is the proportions of red, green and blue that we mix together to get the correct white balance, or we add additional amplification (gain) to the signal, like turning up the volume on an audio amplifier, to make the picture brighter.
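As a minimal sketch of that idea, white balance after De-Bayering is just a per-channel multiply. The grey-card reading and the gain values below are invented for illustration; a real camera derives the gains from the selected colour temperature:

```python
# Sketch: white balance applied as per-channel gain after De-Bayering.
# All signal values here are hypothetical, normalised 0.0-1.0 levels.

def white_balance(rgb, gains):
    """Scale each channel by its gain, clipping at 1.0 (peak white)."""
    return tuple(min(c * g, 1.0) for c, g in zip(rgb, gains))

# A neutral grey card under warm tungsten light reads strong in red and
# weak in blue; the gains pull it back towards equal R, G and B.
tungsten_grey = (0.60, 0.50, 0.35)
gains = (0.50 / 0.60, 1.0, 0.50 / 0.35)

print(white_balance(tungsten_grey, gains))  # roughly (0.5, 0.5, 0.5)
```

Because raw skips this step at record time, the same multiply can be applied (and re-applied with different gains) in the grading suite instead.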
Raw just records the unprocessed sensor output.
So if we just record the raw data coming off the sensor we dramatically reduce the amount of data we need to record. As the recorded signal won’t be immediately viewable anyway (it will need to be De-Bayered first), we don’t need to use a gamma curve, and as the amount of data is lower than it would be for a fully processed colour image, we can actually record the data linearly without image compression. The downside is that to view the recorded image we must process and De-Bayer it during playback. The plus side is that at the same time as De-Bayering we can add our colour balance adjustment and any gain we need. All of this can be done in the edit suite, giving much finer control and the ability to correct and re-do it if you want. What we are doing is moving the image processing from the camera to the edit suite. In addition, the picture is linear, without gamma compression, which makes it incredibly forgiving of overexposure.
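A tiny sketch of why the linear data is so forgiving: pulling an overexposed linear sample down a stop is a simple multiply, with nothing lost, provided the sample never actually clipped. The 0.36 “face” level is an invented illustration value:

```python
# Sketch: exposure correction on linear data. Because no gamma curve
# compressed the highlights at capture, shifting by a stop is an exact
# scale by a power of two.

def shift_stops(linear_value, stops):
    """Shift a 0.0-1.0 linear sample up or down by a number of stops."""
    return linear_value * (2.0 ** stops)

face = 0.36                   # hypothetical face exposed one stop too bright
print(shift_stops(face, -1))  # pulled back down one stop, with no loss
```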
If you have never worked with raw then I suggest you give it a try. Many stills cameras can shoot raw, and it’s essentially exactly the same process with a stills camera as with a video camera. If you have a camera that will do both JPEG and raw at the same time, have a go at shooting in both modes and then adjusting both in a paint package like Photoshop. The difference in post production flexibility is astounding.
Of course, as with all these things, there is no free lunch. You’re still recording a lot of data with linear raw, so your recorded files will be much larger than traditional compressed video. In addition, De-Bayering and processing the images takes time. But modern computers are getting faster and storage is getting cheaper, and working with raw is easier now than it’s ever been. I can work with the 4K raw files from my laptop (Retina MacBook Pro) in real time by using 1/4 resolution playback. The final renders from Resolve do take a little bit of time, but once you’ve taken a bite from the raw apple it will keep tempting you back for more!
Going to NAB? Got a week spare afterwards and fancy something different? Why not join me on a storm chasing adventure. I will be taking a very small group tornado chasing between April 13th and April 20th. This is the start of the tornado season in the USA, and in recent years this time of year has seen some very big tornado outbreaks. Unlike the typical thrill-seeker tornado trips that try to get as close as possible to the tornadoes, my aim is to get into the best positions for beautiful and awe-inspiring shots of the storms and tornadoes. This may mean hanging back just a little bit to give ourselves time to get tripods out and get stable, properly exposed, high quality video and stills. I have absolutely no desire to actually get into a tornado, or to get so close that there is no time to pull back to safety.
In addition I will be looking for opportunities to capture time-lapse of developing storms, beautiful storm structures and spectacular lightning. It should be noted however that sometimes, in order to get a decent view of a tornado we may need to get quite close, but I will not deliberately enter into poor visibility or any other high risk situation.
As part of the trip I will provide tuition and assistance for anyone that needs it. We can look at camera set-ups, picture profiles, time-lapse techniques and any other aspect of video production. The cost of the trip is $1,900 USD, which includes accommodation; we normally stay in mid-budget motels. Food and drink are not included, and you will need to make your own way to/from Dallas, Texas, arriving in Dallas on the 13th of April and departing Dallas on the 21st of April.
What can you expect to see? Well, there are no guarantees. We are at the mercy of the weather, but it is tornado season. I would expect to see impressive “Supercell” thunderstorms that twist and turn, towering from a base at 1,500ft all the way up to 70,000ft. I would expect to see spectacular lightning from these storms, both from cloud to ground and across the vast spreading anvils of these storms. I would not be at all surprised to encounter large hail, maybe golf ball sized or bigger (although I try to avoid any direct encounters with hail bigger than golf balls). There may be haboob dust storms, damaging straight-line winds and, if we are lucky, tornadoes.
Where will we go? Who knows, the only thing I do know is that we will start and finish in Dallas. I will make a daily weather forecast and we will go to the area that offers the best prospect of seeing storms. In April this usually means travelling around the states of New Mexico, Texas and Oklahoma, but it would not surprise me if we end up in Kansas, Colorado, Nebraska or Iowa. There is often a lot of driving, but that is part of the adventure, a road trip across the mid-west seeing small western towns, the vast prairies and cattle ranches.
If you’re interested, please use the contact form to send me a message.
For a zoom lens to be parfocal, that is to stay in focus as you zoom in or out, the distance between the sensor and the rear element of the lens has to be set very accurately. If it is not, the focus will shift as you zoom in or out. This is why on most pro video cameras or lenses there is a back focus or flange-back adjustment that alters this distance over a very small range, often only around +/- 0.5mm.
With lenses that are electronically controlled, like the one on the PMW-200/EX1, it is more complex. The lens itself is not parfocal; its natural focus changes as you zoom. This makes the design of the lens simpler, and thus cheaper, as well as compact and lightweight. But because of this, the camera/lens must use a look-up table of focal length against desired focus distance to dynamically alter the focus as you zoom, making the non-parfocal (vari-focal) lens behave like a parfocal one. This table needs to be re-calibrated from time to time, especially if the lens has been bumped or knocked (even when not in use), and on the PMW-200, EX1 and EX3 (plus other similar cameras) this is what the Auto FB adjust routine does.
If you find that your focus is not tracking accurately when zooming in and out, you may need to run the Auto FB routine to calibrate your lens. Sometimes rough handling of the camera, for example in transit, can throw out the lens’s calibration.
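As a rough sketch of the look-up-table idea (not Sony’s actual implementation — the calibration pairs below are invented), the camera stores a focus-motor offset for a handful of zoom positions and interpolates between them as you zoom:

```python
# Sketch: focus compensation for a vari-focal lens via a calibration
# look-up table. Zoom position runs 0.0 (wide) to 1.0 (telephoto);
# the compensation units are arbitrary focus-motor steps.
from bisect import bisect_left

# (zoom position, focus compensation) calibration pairs -- hypothetical
# values of the kind an Auto FB routine might measure and store.
lut = [(0.0, 0.0), (0.25, 0.8), (0.5, 1.9), (0.75, 2.6), (1.0, 3.0)]

def focus_compensation(zoom):
    """Linearly interpolate the focus offset for a zoom position."""
    positions = [z for z, _ in lut]
    i = bisect_left(positions, zoom)
    if i == 0:
        return lut[0][1]
    if i == len(lut):
        return lut[-1][1]
    (z0, f0), (z1, f1) = lut[i - 1], lut[i]
    return f0 + (f1 - f0) * (zoom - z0) / (z1 - z0)

print(focus_compensation(0.375))  # halfway between 0.8 and 1.9
```

If the stored table drifts away from the lens’s real behaviour (a knock in transit, say), the interpolated offsets are wrong and focus no longer tracks through the zoom — which is what re-running the calibration fixes.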
Hot off the show floor at CabSat is a great new upgrade for the Alphatron 035W viewfinder. The firmware for the viewfinder has been updated to include a waveform monitor and vectorscope. The size of these can be adjusted, so you can have a small inset waveform in the bottom left of the screen or a much larger waveform across the bottom of the screen. This is a great upgrade (especially for anyone thinking of using it with an F5/F55) and best of all it can be applied to any Alphatron EVF. I believe this is available free of charge to anyone that has an Alphatron EVF, which is even better.
There are also some hardware changes, including a new optic in the monocular that, combined with a new filter and protection layer on the LCD screen, means that sun damage is now extremely unlikely even if you don’t close the shutter. Lots of good news coming from Alphatron!
In response to Red’s attempt to sue Sony over claimed patent infringements, Sony have made the following statement:
On February 12, 2013, Red Digital Cinema (“Red”) sued Sony Corporation of America and Sony Electronics Inc. and alleged that the Sony PMW-F5, PMW-F55, and F65 digital cinema cameras infringe two Red patents. The F65 has been commercially available for over a year and the F5 and F55 were announced in October, 2012.
Sony has now had an opportunity to study Red’s complaint and the asserted patents, and categorically denies Red’s allegations. Sony intends to defend itself vigorously in the Red lawsuit. Sony looks forward to prevailing in court, thus vindicating the Sony engineers who developed Sony’s quality digital cinema cameras.
Taken from http://pro.sony.com/bbsc/ssr/show-highend/
Sensor technology has not really changed for quite a few years. The materials used in sensor pixels and photosites to convert photons of light into electrons are pretty efficient, most manufacturers are using the same materials, and they use similar tricks such as micro lenses to maximise the sensor’s performance. As a result, low light performance largely comes down to the laws of physics and the size of the pixels on the sensor rather than who makes it. If you have cameras with the same number of pixels per sensor chip but different sized sensors, the larger sensor will almost always be more sensitive, and this is not something that’s likely to change in the near future.
Both on the sensor and after the sensor, camera manufacturers use various noise reduction methods to minimise noise. Noise reduction almost always has a negative effect on image quality: picture smear, posterisation and a smoothed, plastic-like look can all be symptoms of excessive noise reduction. There are probably more differences between the way different manufacturers implement noise reduction than there are between the sensors themselves.
The less noise there is from the sensor, the less aggressive you need to be with the noise reduction, and this is where you really start to see differences in camera performance. At low gain levels there may be little difference between a 1/3″ and 1/2″ camera, as the NR circuits cope fairly well in both cases. But when you start boosting the sensitivity by adding gain, the NR on the small sensor camera has to work much harder than on the larger sensor camera. This results in either more undesirable image artefacts or more visible noise on the smaller sensor camera. So when faced with challenging low light situations, bigger will almost always be better when it comes to sensors. In addition, dynamic range is linked to noise, as picture noise limits how far the camera can see into the shadows, so generally speaking a bigger sensor will have better dynamic range. Overall, real camera sensitivity has not changed greatly in recent years: cameras with a given sensor size made today are not really any more sensitive than similar ones made five years ago. Of course the current trend for large sensor cameras has meant that many more cameras now have bigger sensors with bigger pixels, and these are more sensitive than smaller sensors, but like for like there has been little change.
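The pixel-size argument can be put into rough numbers. The sensor dimensions below are approximate figures for common 16:9 active areas, used here as assumptions rather than exact specifications:

```python
# Sketch: same pixel count, different chip sizes -- larger chips mean
# larger pixels, each collecting proportionally more light. Dimensions
# are approximate active areas in mm (width, height).

sensors = {
    "1/3-inch": (4.8, 2.7),
    "1/2-inch": (6.4, 3.6),
    "Super 35": (24.9, 14.0),
}

pixels = 1920 * 1080  # the same HD pixel count on every chip

# mm^2 per pixel, converted to square microns (1 mm^2 = 1e6 um^2)
areas = {name: (w * h) / pixels * 1e6 for name, (w, h) in sensors.items()}

for name, area in areas.items():
    print(f"{name}: about {area:.0f} square microns per pixel")
```

The 1/2″ pixel collects roughly 1.8× the light of the 1/3″ pixel — getting on for a stop of extra signal before the noise reduction has to do any work, which is the difference the NR circuits then have to paper over on the smaller chip.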
I have a few shoots and projects coming up that require a very portable setup, with little to no time to use a light meter (tornado chasing next month – anyone want to join me??). Currently the metering and measurement options on the PMW-F5 and F55 are limited to zebras, and the zebras don’t go below 50%. I’m going to be shooting 4K raw, so the camera will be in S-Log2. I can use a LUT to display an S-Log to 709 image in the viewfinder, but this makes it hard to appreciate the full range of what the camera is capturing. When shooting a dark storm against a bright sky the dynamic range of the scene can be massive, so I like to see the native image rather than via a LUT, to help judge overexposure a bit more accurately.

When I’ve done this before, as an exposure tool I’ve taped a grey card to the car, so if I need a quick exposure reference I can point the camera at the card and, in the case of the PMW-F3, use the centre spot meter to get a quick exposure guide. The issue is that for S-Log2 middle grey should be approx 34%, so zebras that only go down to 50% are not much use. I can use white as an alternative, which should fall around 68%, but it’s not ideal. Anyway, I was exploring various options when I remembered that my Alphatron EVF had zebras that could easily go down to 34%. So I decided to check out the Alphatron on my F5 as an alternative to my Sony L350. Both LCD panels have similar resolution, so it was interesting to compare them anyway.
The Sony L350 EVF is a very nice viewfinder, but it’s not cheap, running at around £2K/$3K (although that does include the mount). It has very good contrast and resolution that is high enough that you can’t see the pixels (just) when you look through the monocular. It’s also very versatile as the monocular flips up, both towards the rear and side.
The Alphatron EVF-035W-3G is also a very nice viewfinder, and at half the price of the Sony it is considerably cheaper. It only opens up to the rear, but it does incorporate a very handy shutter in the loupe that, when closed, will prevent sun damage to the LCD screen. Interestingly, both viewfinders specify the same 960×540 half-HD resolution and contrast ratios of 1000:1. One side note: if you want a rubber eye cup with a set of rubber blades that open as you put your eye against the eyepiece, to prevent the sun from damaging your expensive viewfinder, BandPro sell them for about $160 each.
Back to the viewfinders….. So how different are they? Well, to be honest, not very different. My Alphatron is an old pre-production one, so it may be very slightly different to a production unit. Looking into the viewfinder loupe, the image in the Alphatron is considerably larger than the Sony’s; you can just see the pixels in the Alphatron, but not in the Sony. This is simply due to the greater magnification from the optics in the Alphatron; the screen sizes and resolutions are the same. I think the Sony optics are a little better, with fewer aberrations and less distortion, but the viewed image is much smaller. When focussing I found both to provide similar performance, and I could focus equally well with both viewfinders. If anything the Alphatron has a slight edge due to the larger image, but it’s a close call.
You can zoom in pixel to pixel on both viewfinders, both viewfinders have peaking, possibly marginally better on the Sony, but again really not a great deal of difference. Interestingly the Sony peaking system works on vertical edges while the Alphatron appears to favour horizontal.
Contrast, brightness, colour and smear-wise, both EVFs are again very similar; maybe the Sony is just a little better on contrast. I think I might need to calibrate the colours on my Alphatron slightly, but this is easy enough in the menus. I do suspect that they are both using the same LCD panel. Powering and feeding the Alphatron is simple enough: I used a D-Tap to TV-Logic power adapter cable for this test and then took an SDI feed from the Sub SDI bus. But you could also use one of the Aux power outputs on the V-Mount adapter or R5 to power the Alphatron only when the camera is on.
There you have it – the Alphatron 035W EVF is a legitimate option for use with the PMW-F5 and F55. The ability to use the zebras to measure S-Log2 middle grey is a nice bonus, and in addition you have other exposure tools such as false colour. Oh, if only I had these with the Sony EVF! I’m going to have to think long and hard about this: if I had thought about it sooner I could have saved myself £2K by not getting the Sony EVF and using the Alphatron that I already owned. Where possible I will use my TV-Logic 056W monitor (see my review of this great monitor here) with its built-in waveform display for accurate exposure assessment, but sometimes it’s not practical to have a 5.6″ monitor hanging off the side of the camera, and in this situation the extra exposure tools of the Alphatron will be very handy. One last thing: if you are thinking of going down the Alphatron EVF route, do remember you will need a bracket of some kind. The F5/F55’s handle has plenty of 3/8″ and 1/4″ threads, plus there are a few on the top of the camera body, so there are lots of options. I have the Element Technica Micron top plate and handle, and I used a bracket from this. ET do make a dedicated mount for the Alphatron finder that is very nice.
Cinematographer and film maker Alister Chapman's Personal Website