I had one of Juice Designs' base plates on my EX1 and it was fantastic. It never came off the camera and gave me the confidence that any loads exerted on the base of the camera were spread over the entire EX1 base rather than the weedy single 1/4″ screw hole on the bottom of the EX1. When the EX1R arrived things were somewhat improved: instead of one weedy 1/4″ thread there are now two. Great, you think, two is better than one, but the little postage-stamp sized plate that carries the tripod mounting holes is attached to the EX's chassis by four teeny tiny screws. These have a tendency to work loose over time and can break quite easily if over stressed. If you really load up the tripod mount you can fracture the casting, or worse still the chassis of the camera. If all that isn't bad enough, the other issue I have with my EX1R in particular is that the tripod mount casting sits very slightly proud of the base of the camera, so when it's on a tripod (or my 3D rig) it wobbles about quite a bit, as there is only about one square inch of metal in contact with the tripod.

Clearly none of these things are desirable, and that's where the Juice Designs base plates come to the rescue. The EX1 version attaches to the base of the camera in four places. The obvious two are the normal pair of 1/4″ tripod threads. The other two are a couple of small screws that normally hold some of the plastic camera body parts on to the chassis. By spreading any loads across much more of the base of the camera, the tripod-to-camera interface is much stronger. In addition the camera is now rock solid on my tripod and 3D rig. If anyone is thinking of using an EX1 or EX1R on a 3D rig, a base plate like this is not an option, it is absolutely essential!
The base plate is machined from a single piece of aluminium and anodised black. It is clearly a well thought out design, with nice curves that follow the contours of the camera; indeed it looks like it really is part of the camera. It even has small recesses in it to clear some of the lumps and bumps on the base of the EX1R. At the rear of the base plate there is a small cut-out area that allows you to add an optional bolt-on accessory arm or “wing”. You can use the arm to attach devices such as a NanoFlash or radio mic receiver. The arm is supplied with a cold shoe mount which can be attached in a variety of positions, making it very flexible indeed.
It took just minutes to fit the plate. It comes with all the screws that you need plus a couple of allen keys. You will need a small jewellers screwdriver to remove two small screws from the base of the EX1R. This is a high quality product that should help protect your investment and make the camera more stable, so highly recommended. See the Juice Designs web site for more information.
The new Sony F3 will be landing in end users’ hands very soon. One of the camera’s upgrade options is a 4:4:4 RGB output, but is it really 4:4:4 or is it something else?
4:4:4 should mean no chroma sub-sampling, so the same number of samples for the R, G and B channels. This would be quite easy to achieve with a 3 chip camera, as each of the 3 chips has the same number of pixels, but what about the Bayer sensor used on the F3 (and other Bayer cameras too for that matter)?
If the sensor is sub-sampling B and R in the aerial image compared to G (a Bayer matrix has two G samples for each R and B), then no matter how you interpolate those samples, the B and R are still sub-sampled and data is missing. Depending on the resolution of the sensor, even the G may be sub-sampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image. So for 2k that’s 2k R, 2k G and 2k B. For a Bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3 chip design with a pixel for each sample on each of the R, G and B sensors. It appears that the F3’s sensor has nowhere near this number of pixels; rumour puts it at around 2.5k x 1.5k.
If it’s anything less than one pixel per colour sample then, while the signal coming down the cable may have an equal number of RGB data streams, those streams won’t contain equal amounts of picture information for each colour: the resolution of the B and R channels will be lower than the green. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10 bit HDSDI outputs that only contain 8 bits of data: it might be a 10 bit stream, but the data is only 8 bit. It’s like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD, even if it has been upscaled to fill in all the missing bits.
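To put some rough numbers on that, here is a quick back-of-the-envelope check in Python. The sensor size used below is only the rumoured figure mentioned above, not a confirmed spec:

# Does a Bayer sensor have enough photosites per colour for true 2K 4:4:4?
sensor_w, sensor_h = 2500, 1500        # rumoured F3 photosite count, unconfirmed
target_w, target_h = 2048, 1080        # 2K frame

target_samples = target_w * target_h   # samples needed per colour channel
total_photosites = sensor_w * sensor_h

green_samples = total_photosites // 2  # Bayer: 2 green per 2x2 block
red_samples = total_photosites // 4    # 1 red per 2x2 block
blue_samples = total_photosites // 4   # 1 blue per 2x2 block

for name, n in [("G", green_samples), ("R", red_samples), ("B", blue_samples)]:
    print(f"{name}: {n:,} photosites for {target_samples:,} samples needed ({n / target_samples:.2f}x)")

Green comes out at about 0.85 photosites per 2K pixel, while red and blue manage only about 0.42, so both have to be heavily interpolated.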
Now don’t get me wrong, I’m not saying that there won’t be advantages to getting the 4:4:4 output option. By reading as much information as possible from the sensor prior to compression, there should be an improvement over the 4:2:2 HDSDI output, but it won’t be the same as the 4:4:4 output from an F35 where there is a pixel for every colour sample. But then the price of the F3 isn’t the same as the F35 either!
With more and more people using 35mm size sensors, more of the old traditional filming styles and techniques are trickling down from the high end to lower and lower production levels. This is a good thing, as it often involves slowing down the pace of the shoot and taking more time over each shot. One of the key things with film is that you can’t see the actual exposure on a monitor as you can with a video camera. A good video assist system will help, but at the end of the day exposure for film is set by using a light meter to measure the light levels within the scene, then calculating the optimum exposure using the film’s ISO rating. So what exactly is an ISO rating?
Well, it is a measure of sensitivity. It tells you how sensitive the film is to light, or in the case of a digital stills or video camera, how sensitive the sensor is to light. Every time you double the ISO number, the sensitivity doubles. So ISO 200 is twice as sensitive as ISO 100, ISO 1600 is twice as sensitive as ISO 800, and so on.
Now one very important thing to remember is that ISO is a measure of sensitivity ONLY. It does not tell you how noisy the pictures are or how much grain there is. So you could have two cameras rated at 800 ISO, but one may have a lot more noise than the other. It’s important to remember this because if you are trying, for example, to shoot in low light, you may have a choice of two cameras, both rated with a native sensitivity of 800 ISO, but one with twice as much noise as the other. This would mean that you could use gain (or an increased ISO) on the less noisy camera and get greater sensitivity, with a final picture that is no more noisy than the noisier camera. How does this relate to video cameras?
Well, most video cameras don’t have an ISO rating, although if you search online you can often find someone that has worked out an equivalent. The EX1 is rated at around 360 ISO. The sensitivity of a video camera is adjusted by adding or reducing electronic gain, for example +3dB, +9dB etc. Every 6dB of gain you add doubles the sensitivity of the camera. So taking an EX1 (360 ISO), if you add 6dB of gain you double the sensitivity, doubling the ISO to 720, but you also double the amount of noise.
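If you want the equivalent ISO for any gain setting, it is just a power of two. A minimal sketch, using the approximate EX1 rating quoted above:

def equivalent_iso(base_iso, gain_db):
    # Every +6dB of gain doubles sensitivity, so ISO scales by 2^(dB/6)
    return base_iso * 2 ** (gain_db / 6)

print(equivalent_iso(360, 0))   # 360.0 - EX1 at 0dB
print(equivalent_iso(360, 6))   # 720.0 - double the sensitivity, double the noise
print(equivalent_iso(360, 12))  # 1440.0 - two stops up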
Now let’s compare two cameras: the already mentioned EX1, rated at approximately 360 ISO, and the PMW-350, rated at approximately 600 ISO. As you can see from the numbers, the 350 is already almost twice as sensitive as the EX1 at 0dB gain. But when you also look at the noise figures for the cameras, 54dB for the EX1 and 59dB for the 350, we can see that the 350 has almost half as much noise as the EX1. In practice what this means is that if we add +6dB of gain to the 350 we add +6dB of noise, which brings its noise level to 53dB, very close to the EX1. So for the same amount of noise the 350 is between 3 and 4 times as sensitive as the EX1.
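Here is that comparison worked through as code, using the approximate figures quoted above:

# EX1 vs PMW-350, approximate figures from the text
ex1_iso, ex1_snr = 360, 54
p350_iso, p350_snr = 600, 59

gain = 6                                      # add +6dB of gain to the 350
p350_snr_gained = p350_snr - gain             # gain adds noise dB for dB: 53dB
p350_iso_gained = p350_iso * 2 ** (gain / 6)  # 1200 ISO

print(f"PMW-350 at +{gain}dB: {p350_iso_gained:.0f} ISO, SNR {p350_snr_gained}dB")
print(f"EX1 at 0dB: {ex1_iso} ISO, SNR {ex1_snr}dB")
print(f"Sensitivity ratio at similar noise: {p350_iso_gained / ex1_iso:.1f}x")  # ~3.3x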
Does your head hurt yet?
There is also a direct correlation between sensitivity and iris setting or f-stop. Each f-stop represents a doubling or halving of the amount of light going through the lens, so one f-stop is equal to 6dB of gain, which is equal to a doubling (or halving) of the ISO. You may also hear another term in film circles, the T-stop. A T-stop is a measured f-stop: it includes not only the light restriction created by the iris but also any losses in the lens. Each element in a lens leads to a reduction in light, and T-stops take this into account.
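As a formula, the T-stop is simply the f-stop corrected for the fraction of light the glass actually passes. The transmittance figures below are purely illustrative:

import math

def t_stop(f_stop, transmittance):
    # Effective (measured) stop of a lens passing the given fraction of light
    return f_stop / math.sqrt(transmittance)

print(round(t_stop(2.0, 0.90), 2))  # f/2.0 lens losing 10% -> ~T2.11
print(round(t_stop(2.0, 0.75), 2))  # a lossier zoom -> ~T2.31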
So there you go. The key thing to take away is that ISO (and even the 0db gain setting on a video camera) tells you nothing about the amount of noise in the image. Ultimately it is the noise in the image that determines how much light you need in order to get a decent picture, not the ISO number.
XDCAM cameras have Sony’s Skin Tone Detail Correction system included in the picture profiles. By turning this on you can point the camera at a face (or any other coloured object) and select the hue you want to treat. By using the phase and saturation controls you can adjust the exact hue and hue range that will be treated. Then you can turn the detail level up and down for the selected range.
It works, but is a little fiddly to set. I don’t normally use it, instead preferring to shoot with slightly reduced detail level settings overall and then add a diffusion filter in post production using Magic Bullet or similar. Another option would be to use a diffusion filter on the camera; I like the Tiffen Gold Diffusion/FX for faces. If your budget won’t stretch to that, then don’t forget that you can always stretch a very fine mesh, such as a stocking, over the lens for a pleasing diffusion effect. Again it is tricky to get just right: if the mesh is too coarse you’ll see it, too fine and you completely blur the image.
So… you want to change the look of the colour in your pictures but are not sure how to do it. One of the first things that you need to understand is the relationship between white balance and the colour matrix. They are two very different things, with two different jobs. As its name implies, white balance is designed to ensure that whites within the image are white, even when shooting under lighting of different colour temperatures. When you shoot indoors under tungsten lights (you know, the ones the EU have decided you can no longer buy) the light is very orange. When you shoot outside under sunlight the light is very blue. Our eyes adjust for this very well, so we barely notice the difference, but an electronic video camera is very sensitive to these changes. When you point a video camera at a white or grey card and do a manual white balance, the camera adjusts the gain of the red, green and blue channels to minimise the amount of colour in areas of white (or grey) so that they do in fact appear white, i.e. with no colour. So the important thing to remember is that white balance is trying to eliminate colour in whites and greys.
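Here is a minimal sketch of the sums a manual white balance is doing behind the scenes. The patch values are made up to represent a grey card under warm tungsten light:

# Average R, G, B measured from a white/grey card (illustrative values)
patch_r, patch_g, patch_b = 0.72, 0.60, 0.45

# Compute per-channel gains, normalised to green as the reference channel
gain_r = patch_g / patch_r
gain_b = patch_g / patch_b

print(f"R gain {gain_r:.2f}, B gain {gain_b:.2f}")
# After applying the gains the card reads R=G=B, i.e. no colour:
print(patch_r * gain_r, patch_g, patch_b * gain_b)  # 0.6 0.6 0.6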
The matrix, however, deals purely with the saturated parts of the image, the areas where there is colour. It works by defining the ratio in which each colour is mixed with its complementary colours. So changing the white balance does not alter the matrix, and changing the matrix does not alter the white balance (whites will still be white). What changing the matrix will do is change the hue of the image, so you could make greens look bluer for example, or reds more green.
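A simple way to picture the matrix is as a 3x3 mix of each channel with the others. The offsets below are made up for illustration, not Sony’s actual values, but they show why matrix changes leave whites alone:

def apply_matrix(rgb, m):
    r, g, b = rgb
    return tuple(round(m[i][0] * r + m[i][1] * g + m[i][2] * b, 3) for i in range(3))

# Each row sums to 1.0, so neutral tones (R=G=B) pass through unchanged
warm_reds = [
    [1.10, -0.05, -0.05],   # R: boost red against green and blue
    [0.00,  1.00,  0.00],   # G: unchanged
    [0.00,  0.00,  1.00],   # B: unchanged
]

print(apply_matrix((0.5, 0.5, 0.5), warm_reds))  # grey stays grey: (0.5, 0.5, 0.5)
print(apply_matrix((0.8, 0.3, 0.3), warm_reds))  # a red gets richer: (0.85, 0.3, 0.3)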
So if you want to make your pictures look warmer (more orange or red) overall, you would do this by offsetting the white balance, as in a warm picture your whites will appear warmer if they are slightly orange. This can be done electronically, by adding an offset to the colour temperature settings, or by white balancing on a warming card, which is a very slightly blue card; balancing on a blue-tinted card fools the camera into adding orange. If you want to make the reds richer in your pictures then you would use the matrix, as this allows you to make the reds stronger relative to the other colours while whites stay white.
It can get very confusing trying to compare different lenses on different sized sensors, as the size of the sensor determines the field of view. Even if you know the multiplication factors that you need to apply to a lens on a camera with one size of sensor to get the equivalent for another, it’s still hard to visualise. Thankfully Andy Shipsides over at AbelCine has come up with a great web page that allows you to see how different scenes will look on different cameras with different focal lengths. It’s a really useful page to add to your bookmarks: http://www.abelcine.com/fov/
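If you just want the numbers rather than the pictures, the horizontal field of view falls out of a single formula: fov = 2 x atan(sensor width / (2 x focal length)). The sensor widths below are approximate:

import math

def h_fov_deg(sensor_width_mm, focal_length_mm):
    # Horizontal angle of view for a rectilinear lens
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

s35_width = 24.9   # approx. Super 35 frame width in mm
half_inch = 6.97   # approx. 1/2" 16:9 sensor width in mm (EX1 class)

# The same 10mm lens gives completely different framing on each sensor:
print(round(h_fov_deg(s35_width, 10), 1))   # ~102.5 degrees on Super 35
print(round(h_fov_deg(half_inch, 10), 1))   # ~38.4 degrees on a 1/2" chip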
As you may be aware, this domain and most of my other web sites have been unavailable for some time. This was caused by the total failure of the server my domains resided on, which resulted in the loss of all my data, including my backups. The hosting company was supposed to have been backing up my domains as part of the hosting package; however, they had moved my sites to a special server for large domains and I was not aware of this. As a result no backups were kept by the hosting company, again something I was unaware of. I even had a mirror of this domain for safety, but that was lost as well.
Anyway… long story short… I’ve lost 3 years of hard work. The moral of this sorry tale is: don’t take your web host’s claims of “free backups” for granted. Do your own backups and keep them safe.
Over the next few weeks I will try to salvage what I can from this mess. Even this is taking time as the hosting company has left my domains in such a mess that un-picking them is proving difficult and I seem to be spending more time on the support line than anything else.
Thanks for your support. I hope to get all the key information that used to be on the site back online some time in the future.
This will mean a new layout to the site, with feature pages and documentation, which I hope will make it easier to find the things you are looking for. If there are any particular articles that you would like me to try to find or re-write, please post a comment on my blog and I’ll try to find them!
If you were a forum member, you will have to re-register. Please, please post away in the forum. Let’s get things back up to speed.
Cinegammas are designed to be graded. The shape of the curve, with steadily increasing compression from around 65-70% upwards, tends to lead to a flat looking image but maximises the camera’s latitude (although something similar can be achieved with a standard gamma and careful knee setting). The beauty of the cinegammas is that the gentle onset of the highlight compression means that grading will be able to extract a more natural image from the highlights. Note that Cinegamma 2 is broadcast safe and has slightly less latitude than Cinegammas 1, 3 and 4.
Standard gammas will give a more natural looking picture right up to the point where the knee kicks in. From there up the signal is heavily compressed, so trying to extract subtle textures from highlights in post is difficult. The issue with standard gammas and the knee is that the image is either heavily compressed or not, there’s no middle ground.
In a perfect world you would control your lighting (turning down the sun if necessary ;-o) so that you could use standard gamma 3 (ITU 709 standard HD gamma) with no knee. Everything would be linear and nothing blown out. This would equate to a roughly 7 stop range. This nice linear signal would grade very well and give you a fantastic result. Careful use of graduated filters or studio lighting might still allow you to do this, but the real world is rarely restricted to a 7 stop brightness range. So we must use the knee or Cinegamma to prevent our highlights from looking ugly.
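To see why the knee is such a blunt instrument, here is a toy version of it. The knee point and slope are illustrative, not a real camera’s values:

def with_knee(x, knee_point=0.85, slope_above=0.05):
    # x is scene level (0 to well over 1); output is recorded level
    if x <= knee_point:
        return x          # linear below the knee: untouched, grades well
    return knee_point + (x - knee_point) * slope_above  # squashed above it

for scene in [0.5, 0.85, 1.5, 3.0]:
    print(scene, "->", round(with_knee(scene), 3))
# 0.5 -> 0.5, 0.85 -> 0.85, 1.5 -> 0.883, 3.0 -> 0.958

Everything below the knee passes straight through, while a huge range of highlights is squashed into a few percent of the recording, which is exactly why highlight texture is so hard to recover in post.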
If you are committed to a workflow that will include grading, then Cinegammas are best. If you use them, be very careful with your exposure: you don’t want to overexpose, especially where faces are involved. Getting the exposure just right with cinegammas is harder than with standard gammas. If anything, err on the side of caution and come down half a stop.
If your workflow might not include grading then stick to the standard gammas. They are a little more tolerant of slight over exposure, because skin and foliage won’t get compressed until they get up to around the 80% mark (depending on your knee setting). Plus the image looks nicer straight out of the camera, as the camera’s gamma should be a close match to the monitor’s gamma.
The graph to the left shows an idealised, normal gamma curve for a video production chain. The main thing to observe is that the curve is in fact pretty close to a straight line (actual gamma curves are very gentle, slight curves). This is important because it means that when the filmed scene gets twice as bright, the output shown on the display also appears twice as bright, so the image we see on the display looks natural and normal. This is the type of gamma curve that would often be referred to as a standard gamma, and it is very much what you see is what you get. In reality there are small variations on these standard gamma curves designed to suit different television standards, but those slight variations only make a small difference to the final viewed image.

Standard gammas are typically restricted to around a 7 stop exposure range. These days this limited range is not so much down to the latitude of the camera as to the inability of most monitors and TV display systems to accurately reproduce more than a 7 stop range, and to ensure that all viewers, whether they have a 20 year old TV or an ultra modern display, get a sensible looking picture. This means that we have a problem. Modern cameras can capture great brightness ranges, helping the video maker or cinematographer capture high contrast scenes, but simply taking a 12 stop scene and showing it on a 7 stop display isn’t going to work. This is where modified gamma curves come into play.
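For reference, the standard HD camera curve referred to above is defined in ITU-R BT.709 and is close to a 0.45 power law:

def rec709_oetf(L):
    # Scene-linear light L (0-1) -> encoded video level (0-1), per ITU-R BT.709
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

for L in [0.0, 0.018, 0.18, 0.5, 1.0]:
    print(L, "->", round(rec709_oetf(L), 3))
# 0.0 -> 0.0, 0.018 -> 0.081, 0.18 -> 0.409, 0.5 -> 0.706, 1.0 -> 1.0

Paired with the display’s own gamma, the end-to-end system response comes out close to the gentle, near-straight line described above.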
The second graph here shows a modified type of gamma curve, similar to the hypergamma or cinegamma curves found on many professional camcorders. What does the graph tell us? Well, first of all we can see that the range of brightness, or latitude, is greater, as the curve extends out towards a range of 10 T-stops compared to the 7 stops the standard gamma offers. Each additional stop is a doubling of latitude. This means that a camera set up with this type of gamma curve can capture a far greater contrast range, but it’s not quite as simple as that.
Unnatural image response area
Look at the area shaded red on the graph. This is the area where the camera’s capture gamma curve deviates from the standard gamma curve used not just for image capture but also for image display. What this means is that the area of the image shaded in red will not look natural, because where something in that part of the filmed scene gets 100% brighter, it may only be displayed as getting, say, 50% brighter. In practice, while you are capturing a greater brightness range, you will also need to grade or correct this range somewhat in post production to make the image look natural. Generally, scenes shot using hypergammas or cinegammas can look a little washed out or flat. Cinegammas and hypergammas keep the important central exposure range nice and linear, so the region from black up to around 75% is much like a standard gamma curve. Faces, skin, flora and fauna therefore tend to have a natural contrast range; it is only really highlights, such as the sky, that get compressed, and we don’t tend to notice this much in the end picture. This is because our visual system is very good at discerning fine detail in shadows and mid tones but less accurate in highlights, so we tend not to find this highlight compression objectionable.
S-Log Gamma Curve
Taking things a step further, this even more extreme gamma curve is similar to Sony’s S-Log gamma curve. As you can see, it deviates greatly from the standard gamma curve. Now the entire linear output of the sensor is sampled using a logarithmic scale. This allows more of the data to be allocated to the shadows and midtones, where the eye is most sensitive. The end result is a huge improvement in the recorded dynamic range (greater than 12 stops), with less data being used for highlights and more being used where it counts. However, when viewed on a standard monitor with no correction, the image looks very washed out, lacks contrast and generally looks incredibly flat and uninteresting.
Red area indicates where image will not look natural with S-Log without LUT
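For those who want to see the shape of the curve as numbers, this is Sony’s published S-Log transfer function; treat the constants as indicative rather than definitive:

import math

def s_log(t):
    # Scene-linear exposure t -> encoded level, per Sony's S-Log white paper
    return 0.432699 * math.log10(t + 0.037584) + 0.616596 + 0.03

for t in [0.0, 0.02, 0.18, 0.9, 2.0]:
    print(t, "->", round(s_log(t), 3))
# 0.0 -> 0.03, 0.02 -> 0.11, 0.18 -> 0.36, 0.9 -> 0.634, 2.0 -> 0.78

Note how much of the output range is spent below middle grey (0.18) and how slowly the curve climbs above it: shadows and midtones get the data, highlights get squeezed, hence the flat uncorrected look.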
In fact the uncorrected image is so flat and washed out that it can make judging the optimum exposure difficult, and crews using S-Log will often use traditional light meters to set the exposure rather than a monitor, or rely on zebras and known references such as grey cards. For on set monitoring with S-Log you need to apply a LUT (Look Up Table) to the camera’s output. A LUT is in effect a reverse gamma curve that cancels out the S-Log curve, so that the image you see on the monitor is closer to a standard gamma image or your desired final pictures. The problem with this is that the monitor is then no longer showing the full contrast range being captured and recorded, so accurate exposure assessment can still be tricky, as you may want to bias your exposure towards light or dark depending on how you will grade the final production.

In addition, because you absolutely must adjust the image quite heavily in post production to get an acceptable and pleasing image, it is vital that the recording method is up to the job. Highly compressed 8 bit codecs are not good enough for S-Log; that’s why S-Log is normally recorded using 10 bit 4:4:4 with very low compression ratios. Any compression artefacts can become exaggerated when the image is pushed and pulled in the grade. You could use 10 bit 4:2:2 at a push, but the chroma sub-sampling may lead to banding in highly saturated areas. Really, Hypergammas and Cinegammas are better suited to 4:2:2, and S-Log is best reserved for 4:4:4.
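A LUT really is just the capture curve run backwards. Here is a minimal 1D sketch, building on the s_log function above and using a plain 0.45 power gamma as the target look; a real monitoring LUT would be rather more sophisticated:

import math

def s_log(t):
    return 0.432699 * math.log10(t + 0.037584) + 0.616596 + 0.03

def s_log_inverse(y):
    # Encoded level back to scene-linear exposure
    return 10 ** ((y - 0.616596 - 0.03) / 0.432699) - 0.037584

# Build a 256-entry 1D LUT: linearise each code value, re-encode for display
lut = []
for i in range(256):
    linear = max(0.0, s_log_inverse(i / 255))
    lut.append(min(1.0, linear ** 0.45))

# Round trip: encode middle grey with S-Log, then view it through the LUT
grey_code = int(s_log(0.18) * 255)
print(round(lut[grey_code], 3))  # ~0.46, where a 0.45 power gamma puts middle grey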